forum_id: string (length 9-20)
forum_title: string (length 3-179)
forum_authors: sequence (length 0-82)
forum_abstract: string (length 1-3.52k)
forum_keywords: sequence (length 1-29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39-50)
forum_url: string (length 41-52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
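The `reviews` field of each record stores the forum's notes as parallel lists (`note_id`, `note_type`, `note_created`, `note_signatures`), with each note body serialized as a JSON string in `structured_content_str`. A minimal sketch of how one might zip these parallel lists back into per-note dicts — the function name is illustrative, not part of the dataset:

```python
import json

def notes_from_reviews(reviews):
    """Zip the parallel lists of a `reviews` record into one dict per note,
    parsing the JSON payload stored in `structured_content_str`."""
    keys = ["note_id", "note_type", "note_created", "note_signatures"]
    notes = []
    for i, raw in enumerate(reviews["structured_content_str"]):
        note = {k: reviews[k][i] for k in keys}
        note["content"] = json.loads(raw)  # e.g. {"title": ..., "decision": ...}
        notes.append(note)
    return notes
```

This makes it easy to, say, filter for `note_type == "official_review"` or pull out the decision note directly.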
0aaaM31hLB
Learning Symmetries through Loss Landscape
[ "Ahmed A. A. Elhag", "T. Konstantin Rusch", "Francesco Di Giovanni", "Michael M. Bronstein" ]
Incorporating equivariance as an inductive bias into deep learning architectures, to take advantage of data symmetry, has been successful in multiple applications such as chemistry and dynamical systems. Building equivariant architectures, particularly w.r.t. roto-translations, is crucial for effectively modeling geometric graphs and molecules, where an understanding of 3D structure enhances generalization. However, despite their potential, equivariant models often pose challenges due to their high computational complexity. In this paper, we study the capabilities of unconstrained models (which do not build equivariance into the architecture) and how they generalize compared to equivariant models. We show that unconstrained models can learn approximate symmetries by minimizing an additional, simple equivariance loss. By formulating equivariance as a new learning objective, we can control the level of approximate equivariance in the model. Our method achieves competitive performance compared to equivariant baselines while being 10x faster at inference and 2.5x faster at training.
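The equivariance objective described in the abstract can be sketched as a combined loss: a task term plus a penalty measuring how far the model is from commuting with randomly sampled group actions, weighted by coefficients `alpha` and `beta`. The snippet below is a hypothetical illustration, not the authors' implementation — it uses 2D rotations (SO(2)) acting on point rows for brevity, where the paper considers SE(3); all names are made up for the example:

```python
import numpy as np

def random_rotation_2d(rng):
    """Sample a uniformly random 2D rotation matrix."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def augmented_loss(f, x, y, alpha, beta, num_samples=1, rng=None):
    """alpha * task_loss + beta * equivariance_loss, where the equivariance
    term is a Monte Carlo estimate over random rotations g of
    E_g || f(x g^T) - f(x) g^T ||^2  (rows of x are 2D points)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    task = np.mean((f(x) - y) ** 2)
    equiv = 0.0
    for _ in range(num_samples):
        g = random_rotation_2d(rng)
        # Compare "transform then predict" against "predict then transform".
        equiv += np.mean((f(x @ g.T) - f(x) @ g.T) ** 2)
    return alpha * task + beta * equiv / num_samples
```

Setting `alpha = 0, beta = 1` recovers something akin to pure data augmentation, while intermediate ratios trade task fit against approximate equivariance, mirroring the "level of equivariance" controlled in the paper.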
[ "Unconstrained models", "equivariant models", "symmetries." ]
Reject
https://openreview.net/pdf?id=0aaaM31hLB
https://openreview.net/forum?id=0aaaM31hLB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y54qxB62nm", "wXscY4my3D", "sWKJLNLZ1w", "s2H2V2WNPB", "mybBkB7DMF", "mcfl6YY10z", "lwARp24n8e", "ljpuaCzi5D", "ktmvapiSUe", "jSbNB67Im9", "hTchgD18V3", "hPIEBDWFtn", "h8fSyDkA43", "gr5rnA1FDf", "auFMbkinMh", "RMAOzZvOJo", "R9nVG8w5XD", "QckKy34pZL", "Pihj0BU18e", "MyvYgpXB6R", "LzvgPcBDN6", "LxVMSytCqc", "JMXsFEarY5", "FHSOwMI63G", "CLAI4I9Vrg", "CB5N698Mqr", "9DsM1v5adw", "62aL3pRP1T", "42nypiBwuM", "1pCA1MWBRK" ], "note_type": [ "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737524101692, 1730673148347, 1732239331392, 1733182682037, 1732242573817, 1732673037560, 1733182182466, 1734763476368, 1733182103776, 1730648015113, 1730120237756, 1732388767791, 1732664929830, 1732664781395, 1732603562462, 1732239382874, 1732540636422, 1732663484313, 1732524665499, 1732511325789, 1732241604780, 1730706900420, 1732492194536, 1732242832855, 1732662734753, 1732321008960, 1732237306480, 1732237782066, 1732662965169, 1732241858892 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_JGuC" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_JQbk" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Area_Chair_3gmH" ], [ 
"ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_EQn6" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_JQbk" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_EQn6" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_JGuC" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_JQbk" ], [ "ICLR.cc/2025/Conference/Submission11077/Area_Chair_3gmH" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_eYaK" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_eYaK" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Reviewer_EQn6" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ], [ "ICLR.cc/2025/Conference/Submission11077/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces an augmented loss function that incorporates a measure of average equivariance, aiming to enhance the prediction of approximately equivariant information. The authors validate this approach on both equivariant and non-equivariant tasks, utilizing transformer and graph neural network architectures. Furthermore, they conduct a visual analysis of the loss landscape, comparing two different architectures: Transformer with the augmented loss and GATr without.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The augmented loss function is generalizable across various architectures.\\n\\n2. The augmented loss function requires relatively few samples to work effectively making it computationally efficient.\", \"weaknesses\": \"The proposed methodology can be described as approximate equivariance, but the paper lacks an adequate background and comparative analysis against existing works on approximate equivariance. This raises concerns about both the novelty and the empirical validation of the approach.\\n\\n1. Novelty: Augmented loss functions enforcing approximate equivariance have been studied (e.g. [1]) including an average measure (e.g. [2]).\\n\\n2. Empirical Support: The paper does not benchmark against other methods that address approximate equivariance (e.g., [1]), nor does it consider theoretically grounded approaches to symmetry breaking (e.g., [3], [4]) or simpler strategies like combining SE3Transformer with MLPs.\\n\\n[1] Kim, Hyunsu, Hyungi Lee, Hongseok Yang, and Juho Lee. \\\"Regularizing towards Soft Equivariance under Mixed Symmetries.\\\" Proceedings of the 40th International Conference on Machine Learning, ICML'23, 2023, pp. 686, JMLR.org.\\n\\n[2] K. Lin, B. Huang, L. M. Collins, K. Bradbury and J. M. Malof, \\\"A simple rotational equivariance loss for generic convolutional segmentation networks: preliminary results,\\\" IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019.\\n\\n[3] Wang, Rui, Robin Walters, and Tess Smidt. \\\"Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems.\\\" NeurIPS 2023 AI for Science Workshop, 2023, https://openreview.net/forum?id=B8EpSHEp9j.\\n\\n[4] Lawrence, Hannah, Vasco Portilheiro, Yan Zhang, and S\\u00e9kou-Oumar Kaba. 
\\\"Improving Equivariant Networks with Probabilistic Symmetry Breaking.\\\" ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling, 2024, https://openreview.net/forum?id=1VlRaXNMWO.\", \"questions\": \"1. To address fundamental concerns, I recommend a more comprehensive background and analysis, alongside broader empirical comparisons. Specifically, the study should include a wider range of architectures that go beyond strictly equivariant and non-equivariant models, particularly for the motion capture task, which is inherently non-equivariant.\\n\\n2. In practice, if a single randomly sampled group element is used per sample in each training step (as mentioned at the end of Section 3.1), this should be explicitly stated in Section 3.2 where the sampling procedure is discussed and the number of samples M is introduced.\\n\\n3. The paper lacks heuristic, theoretical and/or empirical justification for the choice of one group element per sample per training epoch sufficient. As a result the particular choice of measure is understudied. Moreover, it remains unclear how the equivariance error is measured in Section 6. How many samples are used for this computation?\\n\\n4. It's entirely unclear if the difference in the loss landscape is a result of the augmented loss function or a result of the architectural difference. I would strongly recommend performing comparisons with a fixed architecture.\\n\\n5. The MD17 dataset includes two regression targets: energy and forces. Please clarify in the text which target is being used (likely force regression) and how the results are generated.\\n\\n6. Near the end of Section 6.2, the statement \\\"Best performance is observed at an intermediate level of equivariance...\\\" is confusing. Since the paper modifies the loss function, not the architecture, this needs further explanation for supporting the proposed methodology. 
Otherwise, the conclusion is simply to not utilize strict equivariant architectures for non-equivariant tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their time and all the comments, which have helped us improve our paper and indicate our contribution. We are also open to answer any further questions.\\n\\n**Novelty, and comparison to approximate equivariance:**\\n\\n- Our method is a general training procedure that could be applied to any unconstrained models, without specific assumption of the design (eg GNN/ CNN/ MLP or Transformers) while most of the work on approximate equivariance focus on relaxing equivariant architectures or regularizing specific layers within such architectures. We have added a discussion on this in the Related Work section with red colors in the updated manuscript. \\n\\n- The method defined in [1] is a regularization for the linear layers of Equivariant MLP [2], and [3] regularized the CNN architecture. However, in our work, we didn't assume a specific class of models, which increases the applicability of our method to various domains. We have clarified this in the updated version.\\n\\n**Empirical evaluation:**\\n\\n- We have added a new comparison on Motion Capture task: We compare our method against Projection-Based Equivariance Regularizer (PER) [1], equivariant MLP (EMLP) [2], and Residual Pathway Priors (RPP) [4]. As these architectures are designed based on linear layers and MLP, we apply the augmented loss to standard MLP with a similar number of layers and parameters. 
\\n\\n- We have included a new task on the Jet Flow dataset used by [5] (a two-dimensional benchmark that captures turbulent velocity fields): We apply our method to a Convolutional neural network (CNN) and compare it with Relaxed Steerable Convolution (RSteer) [5] and E2CNN [6] (more details in Appendix A in the paper).\\n\\n- Our new results confirm the applicability of our method to different architectures, including MLPs, CNNs, in addition to GNNs, and Transformers, across a wide range of benchmarks.\\n\\n1. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.\\n\\n2. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. Finzi et al., 2021.\\n\\n3. A simple rotational equivariance loss for generic convolutional segmentation networks: preliminary results.\\n\\n4. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.\\n\\n5. Approximately Equivariant Networks for Imperfectly Symmetric Dynamics. Wang et al., ICML 2022.\\n\\n6. General E(2)-Equivariant Steerable CNNs. Weiler et al., NeurIPS 2019.\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear reviewer,\\n\\nThank you once again for your time and the valuable feedback you have provided. We did our best to answer your questions and follow your comments in the revised version.\\nAs the discussion period is ending soon, we kindly ask if the reviewer\\u00a0have\\u00a0made a decision on whether to raise their score, as described in their latest response, or if they have any further questions.\"}", "{\"comment\": \"We thank the reviewer for their time and all the comments, which have helped us improve our paper and indicate our contribution. 
We are also open to answer any further questions.\\n\\n**Limited group transformations:**\\n\\n- We would like to highlight to the reviewer that our experiments primarily focus on point clouds and molecules across three key tasks: 3D Dynamical Systems, Motion Capture, and Molecular Dynamics. While we acknowledge that fully equivariant tasks, such as rotation, can be computationally intensive when sampling across all angles, our empirical results show that applying a single random rotation during training is sufficient to achieve performance comparable to equivariant baselines.\\n\\n- We also conduct additional experiments with different numbers of samples from the symmetry group during training comparing our method and data augmentation. Our new results confirm that we can achieve reasonable performance using fewer samples from the symmetry group. We included the results in Appendix C.2 in the updated version.\\n\\n**Symmetry group:**\\n\\n- We thank the reviewer for highlighting the need to specify the equivariant tasks our method focuses on. In this work, we consider SE(3) symmetry group (i.e. group of rotations and translations). We have clarified this in the Introduction section of the updated manuscript.\\n\\n\\n**Relaxed equivariance and unconstrained models** \\n\\n- We have updated our manuscript to consider the relevant works on relaxed equivariance in the Related Work section. \\n\\n- We have added new results on the Motion Capture task. We compare our method against Residual Pathway Priors (RPP) [1], Projection-Based Equivariance Regularizer (PER) [2], and Equivariant MLP (EMLP) [3]. 
As these architectures are designed based on linear layers and MLP, we apply the augmented loss to standard MLP with a similar number of layers and parameters.\\n\\n- We also include a new task on the Jet Flow dataset used by [4] (a two-dimensional benchmark that captures turbulent velocity fields): We apply our method to a Convolutional neural network (CNN) and compare it with Relaxed Steerable Convolution (RSteer) [4] and E2CNN [5] (more details in Appendix A in the paper).\\n\\n- We clarified in the updated version that the high computational cost comes particularly with those relying on spherical harmonics and irreducible representations (Introduction Section).\\n\\n- We thank the reviewer for the point on the limited expressive power and included it in the updated version. We also added some examples of tasks that exhibit approximate equivariance (Introduction Section).\\n\\n\\n1. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.\\n\\n2. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.\\n\\n3. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. Finzi et al., 2021.\\n\\n4. Approximately Equivariant Networks for Imperfectly Symmetric Dynamics. Wang et al., ICML 2022.\\n\\n5. General E(2)-Equivariant Steerable CNNs. Weiler et al., NeurIPS 2019.\"}", "{\"comment\": \"Thank you for your patient responses; your responses have indeed addressed some of my concerns, especially on the experimental results of MD17. I am willing to raise my score to 5.\\n\\nI have briefly reviewed the molecular dynamics articles you provided, and I apologize for my lack of professionalism in this field. In fact, my understanding of MD17 has always been focused on force and energy prediction, which is not comprehensive. 
I am willing to lower my confidence to 4.\\n\\nFinally, the rebuttal period is coming to an end, and I won't have the chance to hear back from you, but I will take the time to read the references you've listed and continue to consider whether to change the score.\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear reviewer,\\n\\nThank you once again for your time and the valuable feedback you have provided. We did our best to answer your questions and follow your comments in the revised version.\\nAs the discussion period is ending soon, we kindly ask if the reviewer can consider updating their score, or if they have any further questions.\"}", "{\"metareview\": \"This paper introduces a framework for learning approximate symmetries from data by starting with a neural network that is not inherently equivariant and training it with an equivariance regularizer. The authors made commendable efforts to address several concerns raised by the reviewers, such as the lack of comparisons with key existing works and the need to demonstrate the framework's effectiveness on large-scale, complex systems. However, despite these improvements, the reviewers remain unenthusiastic about the paper. Their main reservations lie in its limited novelty (similar approaches, such as Kim et al., 2023, have already been explored, albeit with a focus on linear layers) and its restricted applicability (e.g., requiring multiple training runs with varying $\\\\alpha$ and $\\\\beta$ values). The AC encourages the authors to further refine this research direction and address the outstanding issues regarding the framework's applicability and practicality.\", \"additional_comments_on_reviewer_discussion\": \"The initial reviews were quite bad, and the authors resolved some of the concerns by clarifying misunderstandings and adding more empirical results. 
Still, only one reviewer raised the score above the acceptance bar (to weak accept), since the fundamental weakness (limited novelty and applicability) has not been fully addressed. None of the reviewers were willing to champion the paper during the final discussion phase.\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear reviewer,\\n\\nThank you once again for your time and the valuable feedback you have provided. We did our best to answer your questions and follow your comments in the revised version.\\nAs the discussion period is ending soon, we kindly ask if the reviewer can consider updating their score, or if they have any further questions.\"}", "{\"summary\": \"The paper attempts to build equivariance into unconstrained models by posing equivariance as a constrained optimization problem, which can, in turn, also control the level of approximate equivariance in the models. The authors demonstrate results in N-body dynamical systems, motion capture, and molecule dynamics, and they analyze the effect of the level of approximate equivariance on task performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-motivated and clearly written (particularly the sections on background and methods).\", \"The limitation section discusses an important limitation of the interplay between optimization paths and loss landscape.\", \"The experiments are conducted in different domains and examine several essential aspects of the algorithm, giving more insights into the method and how levels of equivariance can affect downstream task performance.\"], \"weaknesses\": \"- **Related work**:\\n - Although the paper uses equivariance as a constrained optimization problem and discusses it in the context of unconstrained models, it misses several crucial relevant works. 
Discussion of these works would help to place the submission in the literature and give a view of how this work differs from and compares to existing works.\\n - Learning equivariance from data [1, 2], approximate/soft equivariance [3, 4, 5, 6], equivariance as a constrained optimization problem [7, 8], and equivariance with unconstrained models [9, 10, 11, 12].\\n - Can the authors highlight the differences from Sec 3.1 and Sec. 3.2 of [10]?\\n\\n- **$\\\\beta$ and $\\\\alpha$ as hyperparameters**:\\n - The authors suggest that the level or extent of equivariance can be controlled with $\\\\beta$ and $\\\\alpha$ - is there a formal way to define this \\\"level\\\" of equivariance or is it an intuition tied to the loss itself, i.e., higher $\\\\frac{\\\\beta}{\\\\alpha}$ indicates more equivariant? \\n - Next, how would someone know the optimal level of equivariance while using your proposed algorithm - $\\\\beta$ is not learned, and the results indicate that the optimal $\\\\beta$ can be identified from the test data results, which is not ideal. Rephrasing this, how do you know how much equivariance is required for the task, and thus what values of $\\\\alpha$ and $\\\\beta$ to set?\\n\\n\\n- **Methodology**:\\n - How will your algorithm work if group $G$ is unknown?\\n - How can your method reasonably approximate equivariance if $G$ is very large and the duration of training is not enough?\\n - The highest level of equivariance is when $\\\\alpha = 0$ and $\\\\beta=1$. However, this is equivalent to data augmentation, which does not guarantee exact equivariance. Can your algorithm guarantee exact equivariance?\\n - While the trends are consistent for both metrics, as reported in the paper, it might be helpful to discuss which metric - Eq. 9 or Eq. 10 is better suited for evaluation. 
How does Equation 9 work (or make sense) when $f(x)$ is non-scalar?\\n - For Motion Capture, if the symmetry constraints are already known, instead of complete SE(3) equivariant baselines, why didn't the authors select appropriate equivariant models that are equivariant to the required SE(3) subgroup or consider symmetry breaking [14, 15]? What $G$ did your algorithm use? If it is the subgroup of SE(3), then it is an unfair comparison.\\n\\n- **Minor spelling errors**:\\n - L156 \\\"requiring equivariant into\\\" should be \\\"requiring equivariance in\\\"\\n - L396 \\\"it is\\\" should be \\\"its\\\"\\n\\n\\n**References**:\\n1. Equivariance Discovery by Learned Parameter-Sharing. Yeh et al., AISTATS 2022.\\n2. Learning Equivariances and Partial Equivariances from Data. Romero et al., NeurIPS 2022.\\n3. Learning Layer-wise Equivariances Automatically using Gradients. Ouderaa et al., 2023.\\n4. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.\\n5. Almost Equivariance via Lie Algebra Convolutions. McNeela et al., 2024.\\n6. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.\\n7. Improved Canonicalization for Model Agnostic Equivariance. Panigrahi et al., 2024.\\n8. Structuring Representations Using Group Invariants. Shakerinava et al., NeurIPS 2022.\\n9. Equivariance with Learned Canonicalization Functions. Kaba et al., ICML 2023.\\n10. Equivariant adaptation of large pretrained models. Mondal et al., NeurIPS 2023.\\n11. Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models. Basu et al., AAAI 2023.\\n12. Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance. Kim et al., NeurIPS 2023.\\n13. Steerable Equivariant Representation Learning. Bhardwaj et al., 2023\\n14. Symmetry breaking and equivariant neural networks. Kaba et al., NeurIPS NeuReps workshop 2023\\n15. Improving Equivariant Networks with Probabilistic Symmetry Breaking. 
Lawrence et al., ICML GRaM workshop 2024.\", \"questions\": [\"How will your algorithm fare when there are data constraints? Equivariant models are inherently data efficient, but your algorithm does not seem to be.\", \"The loss landscape plots depend on the selected directions - so how can we infer from just two random directions that the loss landscape is better for Transformers or GATr? The optimization paths should also have an effect, and although it is discussed to some extent in limitations, it would be better if there is more discussion on this.\", \"Most of the other questions I had are listed in the Weaknesses section. I will be happy to improve the score if the authors address the questions and weaknesses with supportive evidence during the discussion phase.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors analyze the application of unconstrained models in equivariant tasks by conducting a comprehensive analysis of unconstrained models, comparing their performance and computational efficiency against equivariant models. Besides, the authors introduce a novel, simple loss function that enables these models to approximate symmetries, which can be optimized during training.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"Relaxing equivariance is a valuable research direction that can break through the constraints on generalization or expressive power caused by strictly equivariant operations.\", \"weaknesses\": \"1. The primary concern is the authors' motivation. The idea of using group transformations for data augmentation is naive, but for many equivariant tasks, it is challenging to obtain a general model through data sampling due to the bias introduced by limited sampling.
For instance, for point clouds or molecules, sampling across all angles would expand the dataset by hundreds of times and still struggle to enable the model to effectively learn fine-grained rotation equivariance. I suggest the authors validate their approach on common 3D datasets such as QM9 or ModelNet40.\n\n2. The authors base their introduction in the first three sections on general equivariance, yet the impact of different equivariance groups on algorithms varies. For example, permutation equivariance and translation equivariance can be directly covered by simple operations, making the paper's method inapplicable. The authors should specify which equivariant tasks their method focuses on.\n\n3. Relaxing equivariance is a widely discussed topic, and the authors lack relevant citations and analysis [1] [2] [3] [4]. Moreover, the main advantage of unconstrained models is their ability to learn more complex features. It is worth noting that strictly equivariant operations can limit the expressive power of GNNs [5] [6], but unconstrained models may surpass these limitations. Additionally, some tasks are not strictly equivariant, allowing unconstrained models to be applicable. The authors' emphasis on the lower computational complexity of unconstrained operations is incorrect. In the 3D domain, models like torchmd are strictly equivariant yet have low complexity.\n\n [1] Residual pathway priors for soft equivariance constraints, Finzi, et al.\n\n [2] Approximately equivariant networks for imperfectly symmetric dynamics. Wang, et al.\n\n [3] Relaxing equivariance constraints with non-stationary continuous filters. van der Ouderaa, et al.\n\n [4] Learning Partial Equivariances from Data. Romero, et al.\n\n [5] On the Universality of Rotation Equivariant Point Cloud Networks. Nadav Dym, Haggai Maron. \n\n [6] On the Expressive Power of Geometric Graph Neural Networks. Chaitanya K. Joshi, Cristian Bodnar, Simon V.
Mathis, Taco Cohen, Pietro Li\u00f2.\n\n4. I do not understand how the loss surface in Figure 1 was created and why it demonstrates the advantages of unconstrained models.\n\n5. There are numerous issues with the paper's presentation:\n\n (a) The equations in lines 218, 224, and 227 lack numbering.\n\n (b) In line 215, the definition of G is finite, which is problematic for integrals where the group size can be infinite. Most groups mentioned in the paper are infinite, and I do not understand why the authors restrict groups to being finite in their initial definition.\n\n (c) All the references have formatting issues because none of them specify the source of the papers. For instance, \"Equivariant Graph Hierarchy-Based Neural Networks\" in your paper is accepted in NeurIPS 2022, not arxiv.\n\n (d) Appendix B is incomplete; several titles are clustered together without any explanatory text.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We thank the reviewer for their time and feedback and for engaging with us in the discussion.\", \"**''I do not understand what the authors meant by \\\"Nonetheless, these directions cannot seamlessly leverage unconstrained architectures that do not bake symmetries into their design by simply altering the training protocol.\\\" since most of the papers attempt to leverage the unconstrained architectures (even pretrained ones) and converting them to appropriate symmetries.''** We apologize for our ambiguous statement in the previous version of the paper.
Our intention was to highlight that, despite methods like canonicalization and frame-averaging utilizing unconstrained architectures, these methods often require additional network or averaging techniques to achieve equivariance and may not rely solely on adjustments to the training protocol (e.g., loss function or optimization strategy) of unconstrained models. We have clarified this in an updated version of our manuscript.\", \"**Comparison with [1]:** The equivariance-promoting regularizer defined in [1] injects equivariance in the latent space of the network by explicitly modeling transformations with an additional map, as it is not obvious how to apply the transformation group to latent vectors. The authors introduced a learnable map $M_{a}$ that is applied to the output of the encoder, which is a latent vector. The learnable map depends on the augmentation parameters, and the authors mentioned that for each augmentation (e.g. single rotation) they need a new map, which might be computationally problematic when doing a large number of augmentations. In our method, we learn the equivariance loss directly using the true label $y$ and apply the group action to it (e.g. for predicting the 3D positions of a system given the initial state, we rotate $y$ by the same rotation matrix applied to the input $x$). Furthermore, we didn't have any additional parameters for the equivariance loss, and it is directly utilized in the optimization of the unconstrained model, making it computationally efficient. We have indicated the difference in the updated version.\", \"**Non-scalar function:** We apologize for missing this point. We have updated the paper and indicated that $\\\\| \\\\| \\\\cdot \\\\| \\\\|_2$ is an $L_2$ norm, making it valid for non-scalar functions. The operator $\\\\rho(g_i)$ represents the action of the group element $g_i$ on the output space of $f$. 
For example in vector-valued outputs, $\\\\rho(g_i)$ is a linear operator (e.g., a rotation matrix) acting on the vector $f(x)$. Please note that equation 9 in the original submission is now equation 12 in the revised version of the manuscript.\", \"**Incorrect citation:** We thank the reviewer for this point and apologize for that. We have updated the paper with the corrected explanation.\", \"1. Steerable Equivariant Representation Learning. Bhardwaj et al., 2023\"]}", "{\"comment\": \"- **\\\"Furthermore, if the authors want to prove that your loss helps reduce the computational cost of equivariant models, you should list some detail results of computational metrics (In Figure 5, we only observe that your method outperforms GATr)\\\"** We thank the reviewer for this point. In Figure 5, we report the wall-clock time for the Geometric Algebra Transformer (GATr) and\\nTransformer architectures. We selected models with an equivalent number of blocks and parameters for a fair comparison (please see Appendix B for details).\\nWe measured the computational efficiency of each model by recording the time taken for both forward and backward passes during training, as well as inference time. We show that Transformer can achieve up to $10$ times\\nfaster inference speed and $2.5$ times faster training speed compared to GATr.\\nHowever, although we have included the GATr architecture in our discussion, GATr itself is faster than many equivariant architectures, such as SEGNN and SE(3)-Transformer (please see [14]).\\n\\n\\n- **\\\"Adding a random rotation to the loss is a natural idea, and perhaps the authors can look into whether there are similar works that have tried this approach.\\\"** We thank the reviewer for this point. 
However, we already include a related work section with a detailed discussion of the existing literature on approximate equivariance, covering the differences between these approaches and our proposed method. If there are any further questions or specific aspects that we have not addressed, or that the reviewer would like to point out, we are happy to provide additional clarifications.

- **"The authors could also design algorithms from the perspective of reducing the training cost of equivariant models. Perhaps modifying the loss could solve the bottleneck in equivariance. I encourage the authors to delve deeper into their research."** We thank the reviewer for their suggestions. However, we believe that both directions are valuable and have potential impact. In this work, we focus on comparing existing equivariant models with their unconstrained counterparts across a diverse set of benchmarks. Specifically, we evaluate Transformers, Graph Neural Networks (GNNs), Convolutional Neural Networks (CNNs), and Multi-Layer Perceptrons (MLPs) on four distinct tasks: Dynamical Systems, Motion Capture, Molecular Dynamics, and Jet Flow benchmarks. We aim to provide valuable insights into the performance, scalability, and applicability of unconstrained vs. equivariant models across various domains.

1. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. Schütt et al., NeurIPS 2017.

2. Rotation Invariant Graph Neural Networks using Spin Convolutions. Shuaibi et al., 2021.

3. Spherical Message Passing for 3D Graph Networks. Liu et al., ICLR 2022.

4. Symmetry-Informed Geometric Representation for Molecules, Proteins, and Crystalline Materials. Liu et al., NeurIPS 2023.

5. Equivariant graph mechanics networks with constraints. Huang et al., ICLR 2022.

6. EqMotion: Equivariant Multi-Agent Motion Prediction With Invariant Interaction Reasoning. Xu et al., CVPR 2023.

7. Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics. Wu et al., NeurIPS 2023.

8. Equivariant Graph Neural Operator for Modeling 3D Dynamics. Xu et al., ICML 2024.

9. Equivariant Graph Hierarchy-Based Neural Networks. Han et al., NeurIPS 2022.

10. Clifford Group Equivariant Simplicial Message Passing Networks. Liu et al., ICLR 2024.

11. Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs. Liao et al., ICLR 2023.

12. EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations. Liao et al., ICLR 2024.

13. Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products. Luo et al., ICLR 2024.

14. Geometric Algebra Transformer. Brehmer et al., NeurIPS 2023.

---

We thank the reviewer for their time and for engaging with us in the discussion. We appreciate the reviewer's acknowledgment that we have taken their suggestions and that the paper has improved.

- **"We often consider equivariance when focus on some complex systems. In this case, adding a random rotation to the loss once is not enough".** As we pointed out before, we conducted additional experiments with different numbers of samples from the $SO(3)$ group during training, comparing our method and data augmentation. In practice, our new results confirm that we can achieve reasonable performance using fewer samples from the symmetry group. Please see the results in Appendix C.2 in the updated version.

- **"The model might be able to witness all possible SO(3) transformations by increasing the number of epochs, but this would also greatly increase the training cost."** We would like to point out that we trained all the models, unconstrained and equivariant, in all the experiments and benchmarks using the same number of epochs.
Please see the details in Appendix B.

- **"I recommend that the authors test their algorithms on some complex systems, such as QM9, GEOM-Drug, OC20, which are more concerned 3D systems in the field of computational chemistry."** We want to clarify that both Motion Capture and Molecular Dynamics (MD17) are widely used 3D benchmarks in the literature; see for example [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

- **"Equivariant models are not actually expensive. For example, the TorchMD-Net model only uses low-degree equivariance, and it can achieve good results on both QM9 and MD17".** We thank the reviewer for pointing out the example of TorchMD-Net. However, there is no contradiction between that and what we pointed out in our paper. Methods that use spherical harmonics and higher-order tensor products (which are more expressive) are computationally expensive. Using a lower degree can reduce the computational cost, but it comes with a trade-off in performance; see for example [11, 12, 13].

- **"Some equivariant models are expensive since the tensor product on higher-degree spherical harmonic representations is very time-consuming".** We appreciate the reviewer's acknowledgment that the 'tensor product on higher-degree spherical harmonic representations is very time-consuming'. However, this is exactly what we indicated in our paper; please see the introduction section.

- **"The comparative experiments in the paper still contain irrationalities: MD17 includes both energy and forces properties data, but I am not sure which one the authors are comparing in Table 2 (if the authors have stated this, I apologize).
Why not compare both energy and forces?"** We would like to clarify that the MD17 benchmark has two common tasks in the literature:

- Invariant task: predicting energies given molecular states/positions (see e.g. [1, 2, 3, 4]).
- Equivariant task: predicting molecular states/positions after a specific number of time steps, given initial states/positions (see e.g. [5, 6, 7, 8]).

In this work, we focus on the equivariant task, following previous work on this task. Our primary objective is to compare unconstrained models with their corresponding equivariant versions (GNN vs. EGNN).

- **"Additionally, the comparison in this table is also unreasonable: why is the error for EGNN so large (MSE for Benzene is 62.40)? The MD17 dataset is a simple molecular dataset, and it should not perform so poorly. You can look at models like TorchMD-Net, Equiformer".** We would like to point out that we follow the literature on the equivariant task, whereas the models the reviewer suggests focus on the invariant task. Moreover, we are not the first to report results for EGNN on this task, and we follow the same setup as the literature; see for example [5, 6, 7, 8]. Additionally, all the training details are reported in Appendix B.3, and we plan to publish the full code if the paper is accepted.

In our work, we observe that the optimal performance for each molecule is attained at a different value of the penalty parameter $\beta$. For instance, Malonaldehyde exhibits a direct correlation between model performance and equivariance, where a higher penalty parameter $\beta$ yields better performance; for this molecule, EGNN achieved good performance. Conversely, for some molecules, there is a trade-off where the best performance is achieved at a lower value of $\beta$.
Please see Section 6.3 in the paper.

---

**Thank you for your response**

Thank you for your answers and patience in clarifying my doubts. I believe the paper has improved and connects better with the existing literature. I have increased my scores to recommend acceptance. However, I do share similar concerns with other reviewers, particularly the limited novelty and applicability. I am a bit concerned that we would have to train different models with multiple combinations of $\alpha$ and $\beta$ for a particular task. It would be interesting to see more insights from the loss landscape and understand if this method can be applied during fine-tuning. For instance, you have a pretrained model, and you want to use it for a task which needs some extent of equivariance: can this method leverage the zero-shot loss landscape of the pretrained model?

---

**Questions:**

- **Approximate equivariance:** We have included in the updated version new results comparing against the approximate equivariance baselines.

- **Equivariance measure and augmented loss:** There appears to be some misunderstanding between the equivariance measure and the augmented loss. For the augmented loss, we use a single random rotation during training; this is indicated in Section 3.1, where we define the equivariance loss and the penalty parameters. Empirically, this is enough to achieve comparable performance with the equivariant baselines. The equivariance measure introduced in Section 4, by contrast, quantifies the degree of equivariance exhibited by a function $f$. We use this measure to compare different models and baselines in the experiments, and to show how increasing the weight on the equivariance loss reduces the equivariance error. For the equivariance measure, we use $M=100$ samples from the group and noticed this was sufficient to obtain stable results.
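For reference, this Monte Carlo estimate of the equivariance error can be sketched as follows (a minimal NumPy illustration of our own, restricted to rotations for brevity; the helper names and the $L_2$ discrepancy are assumptions, not code from the paper):

```python
import numpy as np

def random_rotation(rng):
    """Sample a random rotation matrix in SO(3) via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q @ np.diag(np.sign(np.diag(r)))  # fix the sign ambiguity of QR
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]       # swap two columns to land in SO(3)
    return q

def equivariance_error(f, xs, M=100, seed=0):
    """Monte Carlo estimate of E_g ||f(g.x) - g.f(x)||_2 over M sampled rotations."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(M):
        g = random_rotation(rng)
        for x in xs:  # each x is an (n_points, 3) array of 3D coordinates
            errors.append(np.linalg.norm(f(x @ g.T) - f(x) @ g.T))
    return float(np.mean(errors))
```

For an exactly equivariant map the estimate vanishes up to floating point; for an unconstrained model it quantifies the residual equivariance error.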
We clarified all the points in the updated version.

- **Justification for the choice of one group element per sample:** We conducted additional experiments with different numbers of samples from the symmetry group during training, comparing our method and data augmentation. Our new results confirm that we can achieve reasonable performance using fewer samples from the symmetry group. We included the results in Appendix C.2 in the updated version.

- **The equivariance measure** used in Section 6 is defined in Equation 13. We explained this at the beginning of Section 6 in the updated version.

- **Loss landscape:** We would like to emphasize that unconstrained models may exhibit a more convex or smoother structure around their local minima compared to equivariant models. This observation could serve as additional evidence of the optimization challenges faced by equivariant networks. However, we acknowledge certain limitations in this analysis, such as not accounting for the trajectories each model follows to reach its respective minimum. We have clarified this point in the Limitations section and plan to explore it further in future work.

- **MD17:** The task in MD17 is predicting the 3D trajectories of molecules, similar to [1, 2]. We have provided a detailed explanation of this in Section 6.3.

- **The statement "Best performance is observed at an intermediate level of equivariance...":** This statement refers to the observation that adjusting the weighting parameters of the augmented loss allows us to control the level of approximate equivariance learned by the model (using the quantified measure, as in Figures 2, 3, and 4 in the paper). We noticed there is a trade-off between equivariance and performance, where the best performance can be achieved with a lower weight on the equivariance loss. We have clarified this in the updated version.

1. Equivariant graph mechanics networks with constraints. Huang et al., 2022.

2. Equivariant Graph Neural Operator for Modeling 3D Dynamics. Xu et al., ICML 2024.

We sincerely hope that we have addressed the concerns of the reviewer satisfactorily in the revised version and would kindly ask the reviewer to update their score accordingly.

---

**Response to Authors**

I thank the authors for their response and the revisions, which have improved the paper. I have updated my score accordingly.

I find that the paper still has significant weaknesses. The MD17 task presented is still unclear, and the results are from a procedure that limits the range of baselines. There are MD17 works that consider a significantly broader range of baselines, with better results than those reported. Furthermore, I find the significance of the contribution to be low due to previous studies of the loss function.

---

We thank the reviewer for their time and feedback, and for engaging with us in the discussion. We appreciate the reviewer's acknowledgment of the revisions made and the improvement of the paper.

In this work, we undertake a comprehensive comparison between existing equivariant models and their unconstrained counterparts across a diverse set of benchmarks. Specifically, we evaluate Transformers, Graph Neural Networks (GNNs), Convolutional Neural Networks (CNNs), and Multi-Layer Perceptrons (MLPs) on four distinct tasks: Dynamical Systems, Motion Capture, Molecular Dynamics, and Jet Flow benchmarks.
We aim to provide valuable insights into the performance, scalability, and applicability of unconstrained vs. equivariant models across various domains.

However, we acknowledge that numerous additional ideas for extending our study offer exciting opportunities for future research. For example, as we indicated before, $\alpha$ and $\beta$ serve as additional hyperparameters in the training procedure. Future directions could explore the use of efficient learnable weights, such as [1], or recent approaches that use gradient projection, as suggested in [2]. Additionally, investigating the application of our method during the fine-tuning phase, such as leveraging pretrained models for tasks requiring equivariance, is an exciting prospect. For example, integrating our framework with denoising objectives [3, 4] could enhance its applicability and performance in scenarios where pretrained models are adapted to new tasks. We believe that this direction could significantly impact the field by enabling broader applicability and easier integration into existing frameworks.

1. SLAW: Scaled Loss Approximate Weighting for Efficient Multi-Task Learning. Crawshaw et al., 2021.

2. Task Weighting through Gradient Projection for Multitask Learning. Bohn et al., 2024.

3. Pre-training via Denoising for Molecular Property Prediction. Zaidi et al., ICLR 2023.

4. Pre-training with fractional denoising to enhance molecular property prediction. Ni et al., Nat Mach Intell 2024.

---

**Official Comment by Reviewer JQbk**

Thank you for your response. I am pleased that you have carefully read my questions and taken my suggestions. Now, it seems that the paper has indeed improved. However, I still cannot change my score to a positive opinion, for several reasons:

1. We often consider equivariance when focusing on some complex systems. In this case, adding a random rotation to the loss once is not enough.
The model might be able to witness all possible SO(3) transformations by increasing the number of epochs, but this would also greatly increase the training cost. I recommend that the authors test their algorithms on some complex systems, such as QM9, GEOM-Drug, and OC20, which are 3D systems of greater concern in the field of computational chemistry.

2. Equivariant models are not actually expensive. For example, the TorchMD-Net model only uses low-degree equivariance ($l=1$), and it can achieve good results on both QM9 and MD17. Some equivariant models are expensive since the tensor product on higher-degree spherical harmonic representations is very time-consuming.

3. The comparative experiments in the paper still contain irrationalities: MD17 includes both energy and force property data, but I am not sure which one the authors are comparing in Table 2 (if the authors have stated this, I apologize). Why not compare both energy and forces? Additionally, the comparison in this table is also unreasonable: why is the error for EGNN so large (MSE for Benzene is 62.40)? The MD17 dataset is a simple molecular dataset, and models should not perform so poorly on it. You can look at models like TorchMD-Net, Equiformer, etc., where the results on MD17 are almost saturated. Furthermore, if the authors want to prove that the loss helps reduce the computational cost of equivariant models, they should list some detailed results of computational metrics (in Figure 5, we only observe that the method outperforms GATr).

In conclusion, I am sorry that I cannot change my score, because I believe this paper does not address the key issues of 3D equivariance (performance and computational cost). Adding a random rotation to the loss is a natural idea, and perhaps the authors can look into whether there are similar works that have tried this approach. However, I acknowledge the authors' research direction.
I believe that approximate or learnable equivariance is valuable, because strict equivariance can lead to a loss of expressive power, and unconstrained neural networks are more likely to learn some complex 3D features. The authors could also design algorithms from the perspective of reducing the training cost of equivariant models. Perhaps modifying the loss could solve the bottleneck in equivariance. I encourage the authors to delve deeper into their research.

---

We thank the reviewer for their time and all the comments, which have helped us improve our paper and clarify our contribution. We are also open to answering any further questions.

**Related Work:**

- We thank the reviewer for these points. We have added a new section discussing learning symmetries and approximate equivariance, and included comparisons in the related work section.

- We have included all the citations and comparisons pointed out by the reviewer.

- The method in [1] used a canonicalization approach (i.e., an additional trainable canonicalization network) to orient the input before the main network. For stable canonicalization, the authors used a prior distribution over the symmetry group. However, in our work, we do not use any additional canonicalization network; we directly learn equivariance through the unconstrained model.

1. Equivariant adaptation of large pretrained models. Mondal et al., NeurIPS 2023.

**$\alpha$ and $\beta$ as hyperparameters:**

- We agree with the reviewer that a higher $\frac{\beta}{\alpha}$ leads to a more equivariant function. However, this does not indicate whether, or to what extent, the function approximates equivariance, as it is specific to the parameters $\alpha$ and $\beta$. In other words, we can define the relative equivariance as a ratio of $\beta$ and $\alpha$, but we also want to measure the level of approximate equivariance the model learns; for this, we introduced the equivariance metric.

- $\alpha$ and $\beta$ are treated as additional hyperparameters that are determined from the validation set, similar to other parameters in the training procedure (e.g., learning rate, batch size, number of layers in the architecture).

**Methodology:**

**Symmetry group:** In this work we consider the SE(3) symmetry group (i.e., the group of rotations and translations) and SE(3)-equivariant architectures. We have clarified this in the Introduction section of the updated manuscript. We are happy to extend this in the future to unknown groups (e.g., the Lie algebra convolutional network of [1]).

**If $G$ is very large:** As we focus on the SE(3) symmetry group, this is already a very large (infinite) group. We noticed that, empirically, our algorithm is sufficient to achieve competitive performance with equivariant baselines using a very small number of samples. For example, in the N-body dynamical system, we train all the models using $100$ samples.

**The highest level of equivariance is when $\alpha = 0$ and $\beta = 1$:** It is important to note that $\beta$ is not limited to $1$. In our formulation, $\beta$ can take any positive real value. By increasing $\beta$ we can place greater emphasis on enforcing equivariance in the model and reduce the equivariance error (Figure 2 in the paper).
We observed that this approach is empirically sufficient to achieve competitive performance with advanced equivariant architectures, such as the Geometric Algebra Transformer, when applied to fully equivariant tasks like the N-body dynamical system.

**Which metric is better suited for evaluation:** We think both metrics are applicable, following our observations in the paper, even for non-scalar functions (e.g., predicting trajectories of the dynamical system). However, the first measure, introduced in Equation 12, could theoretically be more stable due to the average over the function $f$.

**Motion Capture task:** For our algorithm, we sample from the SE(3) symmetry group in all the experiments, and not from a subgroup, for a fair comparison against the equivariant baselines. Furthermore, we have added new results on the Motion Capture task with a comparison to approximate equivariance baselines. We compare our method against Residual Pathway Priors (RPP) [2], the Projection-Based Equivariance Regularizer (PER) [3], and the equivariant MLP (EMLP) [4]. As these architectures are designed based on linear layers and MLPs, we apply the augmented loss to a standard MLP with a similar number of layers and parameters. Our new results confirm the applicability of our method to different architectures, including MLPs, CNNs, GNNs, and Transformers, across a wide range of benchmarks.

**Minor errors:** We thank the reviewer for these points; we have corrected them in the updated version.

1. Automatic Symmetry Discovery with Lie Algebra Convolutional Network. Dehmamy et al., NeurIPS 2021.

2. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.

3. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.

4. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. Finzi et al., 2021.

---

**Official Review**

**Summary:** The paper investigates unconstrained models for handling data symmetry. The authors demonstrate that by designing a loss function specifically tailored for learning equivariance, unconstrained models can approximate data symmetry by minimizing this equivariant loss. This approach allows the models to efficiently control the level of equivariance while maintaining flexibility.

**Soundness:** 3. **Presentation:** 3. **Contribution:** 2.

**Strengths:**
- The paper is well-written and presents the core ideas in a clear and accessible manner.
- Using a "landscape" to describe the benefits of unconstrained models is particularly novel and insightful.

**Weaknesses:**
- The study introduces this equivariant loss without providing a strong theoretical foundation for the proposed approach. Also, the proposed method is quite straightforward, and its distinction from data augmentation is unclear. It essentially computes the loss on a larger augmented dataset by sampling transformed data. I suspect this method is already well-known within the community, which limits the novelty of the contribution.
- The experimental comparisons are performed on a limited set of classic models rather than state-of-the-art models, raising concerns about the practical applicability of the method to more advanced techniques.

**Questions:** Please address my concerns in the weaknesses part, especially the novelty of the proposed method and the theoretical foundations.

**Flag for ethics review:** No ethics review needed. **Rating:** 5. **Confidence:** 3. **Code of conduct:** Yes.

---

**Response to Authors**

Thank you for addressing my concerns regarding the novelty of the proposed methods and for incorporating new content based on all reviewers' feedback.
The additions in the revision are helpful and clarify the paper's contributions. I have raised my scores accordingly.

However, the overall idea and methods, while interesting, are relatively straightforward, which limits the paper's contribution. The concept of the landscape is quite interesting, and I believe that further theoretical investigation and empirical studies in this direction could significantly strengthen the paper and elevate its impact.

---

**Loss surface:**

Due to the high dimensionality of parameter spaces in neural networks, visualizing loss functions in three dimensions is a significant challenge. Some works have studied this through 1D or 2D interpolations [1, 2]. Among these, we use the filter normalization method [3], which evaluates the loss function along two randomly selected Gaussian directions in parameter space, starting from the optimal parameters reached at the end of training (the local minimum). Having a flat or smooth surface around the minimum is associated with better generalization and easier training [3, 4, 5]. However, we agree with the reviewer that this still requires further study, and we plan to explore this direction in future work.

1. Qualitatively characterizing neural network optimization problems. ICLR, 2015.

2. An empirical analysis of the optimization of deep network loss surfaces.

3. Visualizing the Loss Landscape of Neural Nets. Li et al., NeurIPS, 2018.

4. Understanding the Difficulty of Training Transformers.

5. Entropy-SGD: Biasing Gradient Descent Into Wide Valleys. Chaudhari et al., ICLR, 2017.

**Other issues:**

- We thank the reviewer for pointing these out. We have updated the paper incorporating all the comments, including the missing equation numbers, corrected references and formatting, and an explanation of the Appendix.
- We agree with the reviewer that we consider infinite groups in our work; we modified the definition in Section 4 and explained that we approximate the integral with a Monte Carlo approach.

We sincerely hope that we have addressed the concerns of the reviewer satisfactorily in the revised version and would kindly ask the reviewer to update their score accordingly.

---

**Thank you for your response**

Thank you for your modifications and answers to the questions. I appreciate your efforts to give the readers a better picture and clarify most of my doubts.

1. I do not understand what the authors meant by "Nonetheless, these directions cannot seamlessly leverage unconstrained architectures that do not bake symmetries into their design by simply altering the training protocol.", since most of the papers attempt to leverage unconstrained architectures (even pretrained ones) and convert them to appropriate symmetries.
2. I apologize for asking the authors to compare with the incorrect paper (i.e., [10]). I wanted to refer to Sec 3.2 of [13]. Please answer this (i.e., the difference between the equivariance-promoting regularizer of [13] and your $\mathcal{L}_{equi}$), and thank you for understanding.
3. As per the previous version, I don't seem to find an answer to "How does Equation 9 work (or make sense) when $f(x)$ is non-scalar?"
4. There is an incorrect citation in line 302, where [13] is cited as using "pretrained models."

I am happy with the authors' answers and their new manuscript version. I await the answers to the remaining points.

---

**Response to all reviewers and ACs**

We thank all the reviewers for their time and valuable feedback, which have helped improve our paper and confirm our contributions.

The reviewers have highlighted several strengths of our work:

- eYaK: "The paper is well-written and presents the core ideas in a clear and accessible manner.
Using a \\\"landscape\\\" to describe the benefits of unconstrained models is particularly novel and insightful.\\\"\\n- JGuC: \\\"The augmented loss function is generalizable across various architectures.\\n The augmented loss function requires relatively few samples to work effectively making it computationally efficient.\\\"\\n- EQn6: \\\"The experiments are conducted in different domains and examine several\\n essential aspects of the algorithm, giving more insights into the method\\n and how levels of equivariance can affect downstream task performance.\\\"\\n- JQbk: \\\"Relaxing equivariance is a valuable research direction that can break\\n through the constraints on generalization or expressive power caused by\\n strictly equivariant operations.\\\"\\n\\nOur work considers the current active topic on learning symmetries in unconstrained models versus constrained equivariant models. We consider two general architectures, Graph Neural Networks (GNNs) and Transformers, along with their equivariant versions. We test our approach on three different tasks: N-body dynamical system, Motion Capture, and Molecular Dynamics.\\n\\nHowever, we found that most of the concerns mentioned by reviewers focus on additional evaluation and analysis compared to the prior work on approximate equivariance. Specifically, there are concerns about the Motion Capture task not being fully E(3) equivariance. We have updated the manuscript (changes highlighted in red color) following their feedback. The updated version includes:\\n\\n- A new section discussing approximate equivariance and included comparisons to relevant prior work.\\n\\n- A new comparison on Motion Capture task: We compare our method against Residual Pathway Priors (RPP) [1], Projection-Based Equivariance Regularizer (PER) [2], and equivariant MLP (EMLP) [3]. As these architectures are designed based on linear layers and MLP, we apply the augmented loss to standard MLP with a similar number of layers and parameters. 
\\n\\n**Table: Performance on Motion Capture dataset (MSE \\u00d7 10\\u207b\\u00b2)**\\n\\n| | EMLP | RPP | PER | MLP | Data Augment. | Ours \\n|----------------------|------------|--------------|------------|--------------|------------|--------------|\\n| **Walking (Subject #35)** | 7.01 \\u00b1 0.46 | 6.99 \\u00b1 0.21 | 7.48 \\u00b1 0.39 | 6.80 \\u00b1 0.18 | 6.37 \\u00b1 0.04 | **6.04 \\u00b1 0.09** |\\n| **Running (Subject #9)** | 57.38 \\u00b1 8.39 | 34.18 \\u00b1 2.00 | 33.03 \\u00b1 0.37 | 39.56 \\u00b1 2.25 | 40.23 \\u00b1 0.94 | **32.57 \\u00b1 1.47** |\\n\\n- A new task on the Jet Flow dataset used by [4] (a two-dimensional benchmark that captures turbulent velocity fields): We apply our method to Convolutional neural network (CNN) and compare it with Relaxed Steerable Convolution (RSteer) [4] and E2CNN [5] (more details in Appendix A in the updated version).\\n\\n\\n**Table: Performance on Jet Flow dataset (RMSE)**\\n\\n| Model | Future | Domain |\\n|---------|----------------------|----------------------|\\n| E2CNN | 0.21 \\u00b1 0.02 | 0.27 \\u00b1 0.03 |\\n| RSteer | *0.17 \\u00b1 0.01* | **0.16 \\u00b1 0.01** |\\n| Ours | **0.16 \\u00b1 0.003** | *0.18 \\u00b1 0.003* |\\n\\n\\nOur new results confirm the applicability of our method to different architectures, including MLPs, CNNs, GNNs, and Transformers, across a wide range of benchmarks. We thank the reviewers again for their valuable feedback and kindly ask them to consider increasing their scores if that addresses their concerns. \\n\\n\\n\\n1. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.\\n\\n2. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.\\n\\n3. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. Finzi et al., 2021.\\n\\n4. Approximately Equivariant Networks for Imperfectly Symmetric Dynamics. Wang et al., ICML 2022.\\n\\n5. General E(2)-Equivariant Steerable CNNs. 
Weiler et al., NeurIPS 2019.\"}", "{\"comment\": \"We thank the reviewer for their time and comments, which have helped us improve our paper. We are also open to answering any further questions.\\n\\n\\n**Theoretical motivations and differences from data augmentation:**\\n\\n- Our theoretical motivation is that by having an adaptive parameter $\\\\beta$ on the equivariance loss, we can modulate the extent to which a model exhibits equivariance, depending on the requirements of the task. In Section 4, we introduce a measure that quantifies the level of learned equivariance in the model, which we use to analyze our results.\\n\\n- The key difference from data augmentation is that we utilize an additional controlled equivariance loss together with the objective loss, both of which are minimized during training. We consider two distinct approaches to regulate the penalty parameters $\\\\alpha$ and $\\\\beta$: constant penalty and gradual penalty. For constant penalty, we assign a fixed weight to each task\\u2019s loss throughout the training process. In contrast, the gradual penalty dynamically adjusts the weights of each task\\u2019s loss during training. For the gradual penalty, we use the GradNorm algorithm [1], which is particularly suited for tasks that involve simultaneous optimization of multiple loss components, as it dynamically adjusts the weight of each loss during training. We clarified this in Section 3.2 of the paper.\\n\\n- By minimizing the equivariance loss term simultaneously with the objective function, we can control the equivariance objective depending on the parameter $\\\\beta$. 
This allows us to systematically adjust the degree of equivariance the model learns (Section 6.1 in the paper).\\n\\n- We noticed this decomposition is important to control the trade-off between equivariance and performance for multiple tasks (Motion Capture in Sections 6.2 and Molecular Dynamics in Section 6.3 of the paper)\\n\\n- Data augmentation can also be viewed as a special case of our method with $\\\\alpha = 0$ and $\\\\beta = 1$. We clarified this in Section 3.3 of the paper. \\n\\n**Comparing to state of the art:**\\n\\nIn this work, we consider Transformers and Graph Neural Networks (GNNs), with their equivariant versions, as our main baselines.\\n\\n- We compared Transformers against SE(3)-Transformer [2], and Geometric Algebra Transformer [3] which is a recent equivariant architecture for geometric data (Sections 6.2 and 6.3 in the paper).\\n\\n- We consider another comparison between GNN and EGNN for molecular dynamics tasks (Section 6.3).\\n\\n- While we have added new results on MLPs and CNNs architectures, we think this will be a future direction to apply our augmented loss to a broader range of unconstrained models. \\n\\nWe sincerely hope that we have addressed the concerns of the reviewer satisfactorily in the revised version and would kindly ask the reviewer to update their score accordingly.\\n\\n1. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks. Chen et al., ICML 2018.\\n\\n\\n2. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. Fuchs et al., NeurIPS 2020.\\n\\n\\n3. Geometric Algebra Transformer. Brehmer et al., NeurIPS 2023.\"}", "{\"comment\": \"We thank the reviewer for their time and for engaging with us in the discussion. 
We appreciate the reviewer\\u2019s acknowledgment of the revisions made and the improvement of the paper.\\n\\nRegarding the MD17 benchmark, we would like to clarify that it has two common tasks in the literature:\\n\\n- Invariant Task: Predicting energies given molecular states/positions (e.g. [1, 2, 3, 4]).\\n- Equivariant Task: Predicting molecular states/positions after a specific number of time steps given initial states/positions (e.g. [5, 6, 7, 8]).\\n\\nIn this work, we focus on the equivariant task, following previous work on this task. Our primary objective is to compare unconstrained models with their corresponding equivariant versions (GNN vs EGNN). \\n\\nFurthermore, we have expanded the related work section to include a detailed discussion of the existing literature on approximate equivariance, with all the differences between these approaches and our proposed method.\\nIf there are any further questions or specific aspects that we have not addressed, we are happy to provide additional clarifications.\\n\\n\\n1. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. Sch\\u00fctt et al., NeurIPS 2017. \\n\\n2. Rotation Invariant Graph Neural Networks using Spin Convolutions. Shuaibi et al., 2021. \\n\\n3. Spherical Message Passing for 3D Graph Networks. Liu et al., ICLR 2022. \\n\\n4. Symmetry-Informed Geometric Representation for Molecules, Proteins, and Crystalline Materials. Liu et al., NeurIPS 2023.\\n\\n5. Equivariant graph mechanics networks with constraints. Huang et al., ICLR 2022.\\n\\n6. EqMotion: Equivariant Multi-Agent Motion Prediction With Invariant Interaction Reasoning. Xu et al., CVPR 2023.\\n\\n7. Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics. Wu et al., NeurIPS 2023.\\n\\n8. Equivariant Graph Neural Operator for Modeling 3D Dynamics. 
Xu et al., ICML 2024.\"}", "{\"comment\": \"**Questions**\\n\\n**Data Efficiency:** As we pointed out before, we apply a hard setting on N-body dynamical systems, where we train on $100$ samples and test on $5000$ samples. Our algorithm competes with equivariant architectures in both in-distribution and out-of-distribution settings (Figure 2 in the paper).\\n \\n**Loss Landscape:** We agreed with the reviewer on the limitations of using two directions to plot the shape of the loss around the minima. This still needs further study, and we plan to explore this direction in future work.\\n\\n\\nWe sincerely hope that we have addressed the concerns of the reviewer satisfactorily in the revised version and would kindly ask the reviewer to update their score accordingly.\"}" ] }
0aTIvSJ83I
Agnostic Sharpness-Aware Minimization
[ "Van-Anh Nguyen", "Quyen Tran", "Tuan Truong", "Thanh-Toan Do", "Dinh Phung", "Trung Le" ]
Sharpness-aware minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the sharpness of the loss landscape, leading the model into flatter minima that are associated with better generalization properties. In another aspect, Model-Agnostic Meta-Learning (MAML) is a framework designed to improve the adaptability of models. MAML optimizes a set of meta-models that are specifically tailored for quick adaptation to multiple tasks with minimal fine-tuning steps and can generalize well with limited data. In this work, we explore the connection between SAM and MAML in enhancing model generalization. We introduce Agnostic-SAM, a novel approach that combines the principles of both SAM and MAML. Agnostic-SAM adapts the core idea of SAM by optimizing the model toward wider local minima using training data, while concurrently maintaining low loss values on validation data. By doing so, it seeks flatter minima that are not only robust to small perturbations but also less vulnerable to data distributional shift problems. Our experimental results demonstrate that Agnostic-SAM significantly improves generalization over baselines across a range of datasets and under challenging conditions such as noisy labels or data limitation.
[ "sharpness-aware", "agnostic model", "optimizer", "MAML", "SAM" ]
https://openreview.net/pdf?id=0aTIvSJ83I
https://openreview.net/forum?id=0aTIvSJ83I
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yXeWVpRp3M", "xdoqYtJqJC", "XaOCfNgGpz", "PimRWQ2sAO", "PhC2NpUPZv" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730517360040, 1731129040693, 1730294562261, 1731944788585, 1730465202869 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6824/Reviewer_cgjs" ], [ "ICLR.cc/2025/Conference/Submission6824/Reviewer_hqiX" ], [ "ICLR.cc/2025/Conference/Submission6824/Reviewer_XCb4" ], [ "ICLR.cc/2025/Conference/Submission6824/Authors" ], [ "ICLR.cc/2025/Conference/Submission6824/Reviewer_UCdS" ] ], "structured_content_str": [ "{\"summary\": \"This paper combines MAML into SAM, proposing a new optimization scheme to improve generalization performance. The paper provides a theoretical propositions on generalization bound and gradients alignments, but it is regarded that the paper mainly focuses on verifying its generalization effectiveness numerically measured on some deep learning tasks. The paper also provides additional ablation results to support gradient alignments and momentum.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"SAM and MAML are both found to be effective for enhancing generalization performance, and that the paper is attempting to explore the intersection of these is encouraging.\", \"The paper follows a standard procedure to evaluate the proposed method (Agnostic-SAM) and shows its effectiveness in experiments.\"], \"weaknesses\": [\"There are several concerns on this paper summarized as follows.\", \"Method\", \"The main idea and motivation of this work, as its current form, remain quite random. They are two of many potential ways to improve generalization performance, but without clearly justifying why these two, the paper simply combine the two approaches and end up providing experimental results. 
This diminishes the technical contributions and novelty.\", \"The authors also claim that it is a \\\"framework\\\", but since it is a simple combination of SAM and MAML, it has not been rigorously evaluated as a framework, i.e., whether it can serve as a general scheme, so it remains an initial idea. There have been many advancements since the original SAM and MAML, but the paper only takes a proof-of-concept approach, limiting its potential.\", \"This idea requires additional computations (validation set, additional forward-backward, hyperparameter tuning), but it is unclear whether this is worthwhile, in particular compared to other potential ways to improve generalization.\", \"Experiments\", \"The experiments are also a bit bland without being tailored to specifically analyze any aspect of SAM and MAML, simply evaluating the final performances, lacking novelty and interesting insights.\", \"The proposed scheme is only compared to naive baselines, and it is seen that the improvements are very marginal across many experiments. It is a bit critical in the sense that Agnostic-SAM makes use of the additional validation set and more computations to get validation gradients, which leaves the question of whether Agnostic-SAM is really the best possible choice for generalization.\"], \"questions\": [\"Can the authors state exactly what contributions they claim from Theorems 1 and 2?\", \"(on the first ImageNet experiments) the top-1/top-5 accuracies seem quite low, why is that the case? Can the authors also provide ResNet50 results? How many runs are these results? 
Can the authors provide standard errors?\", \"It is unclear what the exact difference between the two versions of Agnostic-SAM (Table 2 and Table 5) is; if it means using a different base SAM (i.e., SAM or ASAM), it appears that Agnostic-SAM (with ASAM) often underperforms Agnostic-SAM (with SAM), why is this the case?\", \"How did the authors come up with the rules to set the perturbation bounds originally?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work aims to combine Sharpness-Aware Minimization with Model-Agnostic Meta-Learning, by having worst-case robustified versions of the loss in both the inner and outer loop of meta-learning. This is then tested in the usual supervised learning setups in vision and some meta-learning benchmarks, where the method is shown to outperform the baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation to combine the element of sharpness-minimization in meta-learning for better generalization makes sense. This is then operationalized well in the form of an algorithm that is shown to perform slightly better than the baselines.\", \"The method seems to be extensively tested in supervised learning setups, meta-learning scenarios, as well as those with label noise.\"], \"weaknesses\": [\"Difference wrt Abbas et al. 2022: When compared with this prior work, it is unclear what the novelty here is. The authors mention this paper, but don't bother to explain the similarities or differences. The method here looks **eerily similar** to the two-year-old prior work, which is arguably better written and presented and a lot richer. Except for little bits of analysis on congruence between gradients, I can't spot much of a methodological difference.\", \"Supervised learning experiments: In departure from their motivation, the authors start presenting results on supervised learning. 
I understand that this can be simulated in the meta-learning setup as well, but it comes across as confusing. Then their method involves 4 gradient computations per step, while SGD and SAM will involve 1 and 2 gradient computations respectively. So for a fairer comparison, the authors should have reported results by letting the baselines have more compute. Thus, given the excessive runtime, the method does not seem worth the effort of obtaining marginal gains.\", \"Ablation studies on the relevance of SAM in inner/outer stages of meta-learning would have been insightful: Which part benefits more from SAM? Can the authors run an ablation study?\", \"Momentum hyperparameter and its ablation: Table 8 would suggest that having no momentum results in better performance, but it is bizarre that the authors continue to keep using a momentum in all their results, despite that, especially when the improvements they report are not infrequently of a similar range.\"], \"questions\": \"Besides the above, I am curious why the perturbation radii are set the way they are, i.e., inner rho twice the outer rho? Was it grid-searched? Is there some intuition behind this setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose Agnostic-SAM, a new variant of the Sharpness-Aware Minimization (SAM) algorithm. Instead of only the batchwise gradient ascent perturbation step of SAM, Agnostic-SAM additionally performs a descent step on a validation batch, before computing the gradient for the final update. 
The authors motivate their work from a PAC-Bayes bound and report experimental results on image classification tasks (vanilla classification, noisy labels, meta-learning).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The proposed method requires an additional hyperparameter, but the authors found a way of setting it consistently throughout their experiments: $ \\\\rho_{1} = 2 \\\\rho_{2} $.\", \"Agnostic-SAM improves over baselines in most cases (even though I have doubts about the setups, see below)\", \"Combining ideas from MAML and SAM is a creative approach\"], \"weaknesses\": \"**Comparison to Baselines**\\n\\nAt its core, Agnostic-SAM changes the perturbation step of SAM by adding an additional perturbation based on gradients from a separate, smaller data batch, and the authors claim improved generalization performance. However, several methods have proposed adjustments to SAM\\u2019s perturbation model with improved generalization performance. Most similar to Agnostic-SAM, [1] adds random perturbations to the gradient-based perturbation, while [2] and [3] perform multi-step perturbations. Many more methods exist ([6,7,8,...]), but none of those appear as baselines in the experiments. The only other standard baselines are in Table 2 (ASAM), and in Table 5 (ASAM and FSAM). In the MAML experiment (Tables 6 and 7) the authors report improved performance over [9]. However, they only show the Sharp-MAML-low version from [9], even though other versions exist. In particular, the Sharp-MAML-both variant from [9] outperforms Agnostic-SAM in three out of the four reported cases. It is unclear why this is not reported or explained. \\n\\n\\n**Training time**\\n\\nThe proposed method requires two additional forward-backward passes on the validation batch, which leads to increased computational cost compared to SAM (roughly 27\\\\% wall clock time according to Table 9). 
While the authors briefly mention this in the conclusion, a more thorough discussion and evaluation is needed, as this affects the fairness of comparisons in the main paper.\\n\\n\\n**Hyperparameters and train settings**\\n\\nThe authors report some baseline values from the original papers, and others are reproduced with the $\\\\rho$ values from those papers. For instance, for WRN28-10 on CIFAR100 the SGD number is taken from the SAM paper [5], and the SAM number is reproduced with the same $\\\\rho$ value, but is lower than the number from the SAM paper (83.0\\\\% vs 83.5\\\\% in the SAM paper, which would outperform the reported Agnostic-SAM number). Similar observations hold for ASAM, and CIFAR100. Lower reproduced numbers can be due to different training settings and are not a problem per se if the comparison is fair, but here I have certain doubts because the optimal $\\\\rho$ value can be sensitive to the training settings and was taken from the reference papers. Further, some choices, like e.g. $\\\\rho=0.05$ for SAM in ImageNet transfer learning while $\\\\rho=0.1$ for ImageNet training from scratch look just arbitrary. Some $\\\\rho$ tuning must have taken place, since the authors even claim that _accuracies tend to decrease when reducing $\\\\rho$_. Further, it is unclear how exactly the authors came up with the choice $\\\\rho_{1}=2\\\\rho_{2}$ and if it is based on the ablation in A2, purely by intuition or additional experiments and tuning. Finally, the scope of the experiments is somewhat limited. In particular, there are no experiments with VisionTransfomers, no experiments on text data, and the only larger-scale experiments (ImageNet) are with fairly weak models (at most ResNet-34 for training from scratch).\\n\\n\\n**Theorem 1**\\n\\nThe authors present Theorem 1 as a central motivation for their method. However, this theorem is nearly identical to Theorem 1 and its proof in the original SAM paper [5], with minimal modification. 
As with [5], this theorem would theoretically motivate a version of SAM based on average-case rather than worst-case perturbations. The presented generalization bound implies an average-case sharpness bound, which is only subsequently upper-bounded by a worst-case sharpness bound. This limitation was already present in [5] and has since been highlighted, for example, in [4]. Furthermore, the conclusions from this theorem, i.e. why exactly it would motivate equation (3) and the final algorithm, are not understandable to me. \\n\\n\\n**Clarity**\\n\\nApart from the disconnect between Theorem 1 and the method, it is not well justified why exactly the alignment of the gradients of the perturbed points from train and validation batches would be beneficial for generalization, especially since in the experiments both batches are from the train set. Overall, the MAML perspective is unclear to me, since in the practical algorithm, train and validation batches are both sampled from the train set, and there is only one task to solve in almost all experiments. Additional confusion arises from unclear terminology (e.g. the notation $\\\\theta^{*}(\\\\theta)$ wasn\\u2019t introduced, the Taylor expansion in (7) is presented as an exact equality, etc.)\\n\\n\\n\\n[1] Yong Liu, Siqi Mai, Minhao Cheng, Xiangning Chen, Cho-Jui Hsieh, & Yang You (2022). Random Sharpness-Aware Minimization. In Advances in Neural Information Processing Systems.\\n\\n[2] Kim, H., Park, J., Choi, Y., Lee, W., and Lee, J. Exploring the effect of multi-step ascent in sharpness-aware minimization\\n\\n[3] Goncalo Mordido, Pranshu Malviya, Aristide Baratin, & Sarath Chandar (2024). Lookbehind-SAM: k steps back, 1 step forward. In Forty-first International Conference on Machine Learning.\\n[4] Maksym Andriushchenko and Nicolas Flammarion (2022). Towards Understanding Sharpness-Aware Minimization. ICML 2022\\n\\n[5] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 
Sharpness-aware minimization for efficiently improving generalization. In ICLR, 2021\\n\\n[6] Minyoung Kim, Da Li, Shell X Hu, and Timothy Hospedales. Fisher SAM: Information geometry and sharpness aware minimisation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning\\n\\n[7] Mi, P.; Shen, L.; Ren, T.; Zhou, Y.; Sun, X.; Ji, R.; and Tao,D. 2022. Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach\\n\\n[8] Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, & Vincent Tan (2022). Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. In International Conference on Learning Representations.\\n\\n[9] Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, and Tianyi Chen. Sharp-maml: Sharpness-aware model-agnostic meta learning\", \"questions\": \"According to Section 3.3 the goal is to align the perturbed gradients of the validation and train batches. Why do the authors then report the alignment of the unperturbed gradient of the train batch with the perturbed gradient of the validation batch in Section 5.1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces Agnostic-SAM, an optimization method that integrates insights from MAML into SAM. 
The approach seeks to update the model to a region that not only minimizes sharpness on the training set but also implicitly ensures strong performance on the validation set.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a comprehensive evaluation of Agnostic-SAM across a wide range of tasks, including image classification, transfer learning, training with label noise, and meta-learning.\", \"weaknesses\": \"1.\\tThe motivation for the problem formulation in Equation 3 is not convincingly justified. It would benefit from a clearer explanation of why this specific formulation was chosen and how it directly leads to generalization.\\n2.\\tThe paper does not sufficiently clarify how the integration of MAML\\u2019s insights with the proposed problem formulation and algorithm specifically aids generalization. A deeper theoretical or empirical justification is needed.\\n3.\\tThe proposed algorithm assumes the existence of a held-out validation set. However, in practice, the training set is used as the validation set, which diverges from the theoretical framework. This discrepancy is particularly problematic in datasets like CIFAR-10 and CIFAR-100, where the training loss converges to zero, a behavior not typically observed with a true validation set.\\n4.\\tFrom Algorithm 1, it appears that Agnostic-SAM requires double the computational time compared to SAM. In the experiments, SAM is compared by allowing SGD to run for double the iterations for fair comparison [1]. It would be fairer to allow SAM and ASAM to run for twice the iterations of Agnostic-SAM in the experiment.\\n\\n[1] Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. 
In International Conference on Learning Representations, 2021\", \"questions\": \"Please refer to the concerns raised in the weaknesses section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0a7TRHhhcS
Preference-Driven Spatial-Temporal Counting Process Models
[ "Chao Yang", "Yiling Kuang", "Shuang Li" ]
Traditional spatial-temporal models often overlook the complex decision-making processes and social factors that shape spatial-temporal event data generated by humans. This paper introduces a novel framework that integrates choice theory with social intelligence to model and analyze counting processes, such as crime occurrences or bike-sharing activity, where the observed discrete events result from individual decisions influenced by social dynamics. Our approach aims to uncover latent human preference patterns, represented by utility functions, to capture the diverse decision-making factors within a population that result in the observed event counts. These latent factors help explain how choices—such as where and when to commit a crime—are shaped by personal preferences, environmental conditions, and social influences. By modeling the aggregate outcomes of these individual choices, we can better understand and predict patterns in counting processes. The proposed model adopts a preference-driven approach to counting data, providing interpretable insights at a detailed level. It also enables in-depth analysis of how external interventions, like law enforcement actions or policy changes, influence individual decisions and how these effects spread through the system. Empirical evaluation of crime and bike-sharing datasets demonstrates our model's ability to offer clear insights and achieve high predictive accuracy.
[ "choice model", "spatial-temporal counting process model" ]
Reject
https://openreview.net/pdf?id=0a7TRHhhcS
https://openreview.net/forum?id=0a7TRHhhcS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zuQYkI7TwM", "yNrdHjrNlJ", "x3q25hUCli", "vBjepaUanj", "pGK203a2fA", "kj5JyRES5k", "kBTodDMrZJ", "jMYZfljOst", "iYbUIGu5K9", "ba6TzCk3t0", "WIQU3LsqrS", "Vv0Irz14bx", "RdsBapBo6I", "QCEkUNj7wt", "F5XCrbQSrS", "Cys7ABXZTG", "9NujpOdb4f", "68m9bp2Vc0" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1734361657469, 1732377316017, 1730451842319, 1732380730279, 1732382961247, 1737524260254, 1730494914626, 1732375727721, 1732383702937, 1732378188853, 1732382070084, 1732634640188, 1732381762673, 1733229985444, 1732379435247, 1730732003011, 1730307017247, 1732378619777 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13442/Area_Chair_D4F7" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Reviewer_cFeC" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13442/Reviewer_VmVn" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Reviewer_swpn" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ], [ "ICLR.cc/2025/Conference/Submission13442/Reviewer_v1uj" ], [ "ICLR.cc/2025/Conference/Submission13442/Reviewer_swpn" ], [ "ICLR.cc/2025/Conference/Submission13442/Authors" ] ], "structured_content_str": [ 
"{\"metareview\": \"The paper introduces a framework to predict spatial-temporal event data generated by humans. In terms of strengths, the reviewers pointed out that the problem tackled by the paper is important, the proposed framework is methodologically sound, and the paper is well-written. In terms of weaknesses, the reviewers had concerns regarding the scalability of the framework, the claims regarding interpretability, the comparison with the state of the art, the significance of the technical contribution, and the experimental setup used in the experiments. Overall, two of the reviewers were fairly negative and two were mildly positive and, although the authors made a significant effort in addressing the reviewers' concerns during the rebuttal period, the reviewers were not persuaded to change their overall recommendation. As a consequence, I cannot recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised a number of concerns, summarized in the metareview. The authors made a significant effort in addressing these concerns, conducting additional experiments during the rebuttal period. One of the reviewers did follow up; however, this reviewer did not change their overall recommendation. The other reviewers did not follow up or change their overall recommendation. As a consequence, I cannot recommend accepting the paper.\"}", "{\"title\": \"Response to Reviewer v1uj\", \"comment\": \"Additionally, we have explored the **impact of spatial and temporal grid resolutions** on both model performance and computational costs. The results are presented in the tables below, and detailed experiment results with corresponding analysis can be found in **Appendix F** in the revised paper. We use the NYC crime dataset with 2985 samples and 5561 samples across 4 cases, each case with a different time grid length and region resolution. 
The descriptions of these 4 cases and the corresponding experiment results are below:\\n\\n- **Case-1**: 2 time grids (12 hours per grid), 25 region blocks (5 * 5)\\n- **Case-2**: 4 time grids (6 hours per grid), 100 region blocks (10 * 10)\\n- **Case-3**: 6 time grids (4 hours per grid), 225 region blocks (15 * 15)\\n- **Case-4**: 8 time grids (3 hours per grid), 400 region blocks (20 * 20)\\n\\n| Case | Case-1 | | Case-2 | | Case-3 | | Case-4 | |\\n|---|---|---|---|---|---|---|---|---|\\n| Sample Size | 2985 | 5561 | 2985 | 5561 | 2985 | 5561 | 2985 | 5561 |\\n| aRMSE | 4.72 +/- 0.25 | 3.50 +/- 0.12 | 2.29 +/- 0.13 | 2.06 +/- 0.10 | 2.75 +/- 0.06 | 2.65 +/- 0.08 | 2.12 +/- 0.17 | 1.98 +/- 0.15 |\\n| MAPE | 147.67 +/- 8.33 | 127.83 +/- 6.25 | 104.43 +/- 5.33 | 100.72 +/- 5.67 | 110.67 +/- 2.65 | 105.33 +/- 3.57 | 101.30 +/- 3.76 | 98.74 +/- 5.12 |\\n| Time Cost (h) | 0.5346 | 1.0295 | 1.7971 | 3.8970 | 3.2025 | 5.5152 | 6.2380 | 8.9856 |\\n\\nWith finer resolutions for the time grid and region blocks, the model is expected to capture event patterns more accurately and with greater granularity in time and location. However, our experimental results indicate that increasing the spatial and temporal resolution does not significantly enhance the model performance. For instance, when comparing Case-4 with Case-2 using a dataset of 2985 samples, despite Case-4 having finer resolution, the mean aRMSE (over three different seeds) only decreases from 2.29 to 2.12, and the mean MAPE decreases from 104.43 to 101.30. This could be attributed to the overly detailed partitioning of time and space, leading to insufficient instances of events at each time-location pair, thereby impacting the model's effectiveness. Further validation of this observation is evident when varying the sample size within the same case. 
For Case-2, increasing the sample size from 2985 to 5561 results in a more significant improvement in model performance, with the aRMSE decreasing from 2.29 to 2.06 and the MAPE decreasing from 104.43 to 100.72. This underscores the substantial impact of increasing dataset size on model effectiveness. Hence, the results presented in our paper reflect a trade-off in selecting resolution based on balancing model performance and the level of detail in capturing time-location pair patterns.\"}", "{\"summary\": \"This paper presents a novel framework that integrates choice theory with social intelligence to model spatial-temporal counting processes, such as crime occurrences and bike-sharing activities. By capturing latent human preference patterns through utility functions, the model aims to provide deeper insights into the mechanisms driving these events. Empirical evaluations using crime and bike-sharing datasets show that the proposed model offers high predictive accuracy and interpretability compared to existing methods, though potential limitations and future research directions are not extensively discussed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Approach**: The paper introduces an innovative framework that integrates choice theory with social intelligence to model spatial-temporal counting processes. This approach addresses the complex decision-making processes and social factors influencing human-generated event data, such as crime occurrences and bike-sharing activities.\\n2. **Interpretable Insights**: The model provides interpretable insights by uncovering latent human preference patterns through utility functions. This feature helps in understanding the underlying mechanisms driving the observed event counts, which is valuable for both academic and practical purposes.\\n3. 
**Predictive Performance**: Empirical evaluations using crime and bike-sharing datasets show that the proposed model achieves good predictive accuracy compared to existing methods. The results indicate that the model can effectively predict event patterns and offer useful insights.\\n4. **Theoretical Foundation**: The paper derives a generalization bound that is independent of the number of latent classes, providing a theoretical foundation for the model's robustness and reliability. This theoretical contribution adds to the academic value of the work.\\n5. **Practical Flexibility**: The model demonstrates flexibility in handling different types of spatial-temporal data and can incorporate external interventions, making it adaptable to various real-world scenarios.\", \"weaknesses\": \"1. **Interpretability Validation**: While the model emphasizes interpretability, this claim is not fully supported with detailed case studies or qualitative analyses. More concrete examples and validation are needed to ensure that the insights provided are actionable and meaningful. Without such validation, the interpretability aspect, though highlighted as a strength, remains somewhat abstract and less convincing.\\n2. **Computational Efficiency**: The paper does not extensively address the computational efficiency of the model. Practical applications often involve large-scale datasets, and understanding the model's scalability and resource requirements is crucial. Without this information, it is challenging to determine the feasibility of deploying the model in real-world settings, which could limit its practical utility.\\n3. **Future Research Directions**:\\nThe paper does not clearly outline future research directions or potential extensions of the model. Discussing these aspects would provide a clearer path for advancing the field and addressing current limitations. 
Identifying open questions and suggesting avenues for further investigation would enhance the paper's contribution and encourage ongoing research in this area.\", \"questions\": \"1. How do hyperparameter changes, such as learning rate, regularization parameters, and the number of mixture components, affect the model's performance?\\n2. In what ways can the model be tested on a variety of datasets with different spatial and temporal characteristics to assess its generalizability?\\n3. How can cross-validation and out-of-sample testing be conducted to ensure the model's stability and consistency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer cFeC\", \"comment\": \"In response to your questions:\\n\\n**Q1**: We use training negative log-likelihood, test aRMSE, test MAPE, and training time cost as **metrics** to select appropriate hyper-parameters. Taking the NYC crime dataset with 732 samples as an example, we divide the time horizon and region into 4 time slots and 100 region blocks.\\n\\n- **Impact of Learning Rate**\\n| Learning Rate | 1e-4 | 5e-4 | 1e-3 | 5e-3 | 1e-2 | 5e-2 |\\n|---|---|---|---|---|---|---|\\n| Mean Neg. Log-likelihood | -4.980 | -5.112 | -5.059 | -4.839 | -4.854 | -5.010 |\\n| aRMSE | 2.38 +/- 0.04 | 2.25 +/- 0.08 | 2.34 +/- 0.05 | 2.53 +/- 0.26 | 2.57 +/- 0.64 | 2.48 +/- 0.40 |\\n| MAPE | 108.33 +/- 5.12 | 106.90 +/- 4.67 | 107.94 +/- 4.27 | 115.47 +/- 7.32 | 113.50 +/- 7.83 | 110.33 +/- 6.12 |\\n| Time Cost (h) | 0.8354 | 0.7239 | 0.5859 | 0.5630 | 0.6239 | 0.5253 |\\n\\nWe vary the learning rate through a grid search from 1e-4 to 5e-2, and our current choice is 1e-3. When using a relatively low learning rate like 1e-4, the mean converged negative log-likelihood (lower is better) and the predictive accuracy remain nearly the same as with our current selection (1e-3), but with increased computational cost for convergence. 
With a larger learning rate, the model performance starts to become unstable. For instance, when the learning rate is set to 5e-2, the mean aRMSE and mean MAPE for the prediction task across three random runs increase to 2.48 and 110.33, respectively, with larger standard deviations of 0.40 and 6.12. And the mean converged negative log-likelihood is also larger than the result achieved by our current choice of learning rate. Meanwhile, the training time for the model to converge does not decrease significantly. Therefore, we opted for a learning rate of 1e-3 in our experiments.\\n\\n- **Impact of Number of Experts**\\n| Number of Experts | 1 | 2 | 3 | 4 | 5 |\\n|---|---|---|---|---|---|\\n| Mean Neg. Log-likelihood | -4.885 | -4.934 | -5.059 | -5.219 | -5.320 |\\n| aRMSE | 2.52 +/- 0.08 | 2.46 +/- 0.10 | 2.34 +/- 0.05 | 2.25 +/- 0.12 | 2.28 +/- 0.08 |\\n| MAPE | 113.85 +/- 5.23 | 110.54 +/- 5.67 | 107.94 +/- 4.27 | 105.83 +/- 4.50 | 104.73 +/- 4.75 |\\n| Time Cost (h) | 0.4087 | 0.4860 | 0.5859 | 0.8216 | 1.2621 |\\n\\nFor the number of experts, we vary it from 1 to 5. As our current choice for the NYC dataset is 3, decreasing it would degrade the model performance. However, increasing the number of experts does not yield a substantial improvement in model effectiveness but notably escalates computational costs. For instance, raising the number of experts from 3 to 5 would extend the training time from 0.5859 hours to 1.2621 hours because of increased learnable model parameters, while the negative log-likelihood, aRMSE, and MAPE metrics only marginally shift from -5.059 to -5.320, 2.34 to 2.28, and 107.94 to 104.73, respectively. Hence, to strike a balance between model performance and training efficiency, we opt to maintain the number of experts at 3 for the NYC dataset. 
We also employed a similar strategy when selecting the number of experts for other datasets.\\n\\n**Q3**: To ensure the model's stability and consistency, we can use k-fold cross-validation and out-of-sample testing. To implement k-fold cross-validation, we split our crime or Mobike datasets into k subsets, train the model on k-1 folds, and validate on the remaining fold. We repeat this process k times, each time using a different fold as the validation set. After training the model with k-fold cross-validation, we can use out-of-sample testing to evaluate it on entirely new, unseen data. In the prediction tasks detailed in our paper, we have employed both cross-validation and out-of-sample testing to validate and ensure the robustness of our model's performance.\"}", "{\"title\": \"Response to Reviewer swpn\", \"comment\": \"**W6**: We summarize our response into the following aspects:\\n\\n- **Are there specific sequential criminal events in the datasets? If so, does the proposed method successfully retrieve these sequences?**: Specific sequential criminal events, such as repeated offenses by the same suspect, are rare in our crime dataset. However, the Mobike dataset documents multiple instances of bike rentals by the same user. The table below illustrates the mixture patterns adjusted by utility scores for different experts, corresponding to 4 bike rental records by the same user (user id 10344) on August 7, 2016, in Shanghai. The results indicate that under the same expert, the mixture patterns are similar, whereas they vary significantly across different experts. The mixture patterns generated by expert-1 exhibit markedly higher values compared to those produced by other experts. 
This demonstrates our model's ability to capture diverse user behaviors, underlying thinking processes, and sequential decision-making patterns in bike rental scenarios.\\n| Time-Location Pair (time, (Lat., Lon.)) | Expert-1 | Expert-2 | Expert-3 | Expert-4 |\\n|---|---|---|---|---|\\n| 12h-16h, (31.185, 121.468) | 2.68*1e-2 | 1.48*1e-2 | 1.65*1e-2 | 1.63*1e-2 |\\n| 16h-20h, (31.185, 121.468) | 3.25*1e-2 | 1.83*1e-2 | 1.92*1e-2 | 1.88*1e-2 |\\n| 16h-20h, (31.185, 121.468) | 3.25*1e-2 | 1.83*1e-2 | 1.92*1e-2 | 1.88*1e-2 |\\n| 20h-24h, (31.185, 121.468) | 2.40*1e-2 | 1.76*1e-2 | 1.96*1e-2 | 1.80*1e-2 |\\n\\n- **If each crime is independent, how are the datasets suitable for examining causal relationships?**: We assume there are K latent mixtures, and each crime belongs to one of the mixtures independently. The causal relationships are revealed by the latent mixtures.\\n\\n- **How does the model demonstrate that its improvements are due to modeling the human decision process? Could simple statistics identify criminal hotspots at specific time slots to yield similar results to those in Fig. 2?**: For Figure 2, simple statistics consider each time interval independently. However, our model jointly considers all the time intervals within a day and the interaction effects between time and location. To test the difference between our model and statistical methods, we have added extra experiments. For the statistical method, we use the frequency of the previous day to predict the next day\\u2019s events. As the results in the table below show, our model outperforms the statistical method on the prediction tasks, indicating that incorporating the human decision process indeed improves model performance. 
\\n\\n| Dataset | NYC Crime | | Chicago Crime | | Shanghai Mobike | |\\n|---|---|---|---|---|---|---|\\n| Metric | aRMSE | MAPE | aRMSE | MAPE | aRMSE | MAPE |\\n| Statistical Method | 4.75 +/- 0.00 | 128.93 +/- 0.00 | 7.34 +/- 0.00 | 132.54 +/- 0.00 | 7.82 +/- 0.00 | 175.54 +/- 0.00 |\\n| Ours* | 2.34 +/- 0.05 | 107.94 +/- 4.27 | 2.19 +/- 0.04 | 87.38 +/- 3.12 | 3.28 +/- 0.04 | 154.62 +/- 7.05 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper is about the prediction problem for spatial-temporal event data generated by humans. The authors introduced a framework integrating choice theory with social intelligence to model and analyze counting processes. The authors further conducted experiments on several real-world spatio-temporal datasets, and empirical evaluation of crime and bike-sharing datasets demonstrated that the proposed model could achieve the best performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The studied forecasting problem of spatio-temporal events is very important, interesting, and of high value in the real world.\\n2. The presentation is overall good, and the organization makes the paper easy to read and comprehend.\\n3. The authors select two representative metrics, aRMSE and MAPE, on which the proposed method achieves the best performance among all these models.\", \"weaknesses\": \"1. The datasets are small, leading to less convincing results and conclusions. Although the authors have considered three datasets, NYC Crime, Chicago Crime, and Shanghai Mobike, for evaluation, the scales of these datasets are quite limited. There are fewer than 1000 events in the first two datasets, which makes us wonder whether the proposed method can be used in real-world applications where the dataset may be very huge.\\n2. The technical contribution of the proposed method is questionable. 
The proposed method introduces a strategy of MoE, which is widely used in model ensembling and limits the contribution of the whole framework. In other words, it is very likely to improve performance by adding the MoE module. In short, the proposed solution is a bit straightforward.\\n3. Figure 2, Figure 3, and Figure 4 require improvement. Observing some informative and insightful conclusions from these figures is very hard since the grids are coarse-grained.\", \"questions\": \"Please answer the questions corresponding to the weaknesses mentioned above.\\n1. Why use these small datasets for evaluation? What about the actual value of the proposed method when applied to large-scale datasets?\\n2. How do you explain the performance improvement of the MoE module and the relation between it and the overall performance improvement?\\n3. How about the performance improvement when we have fine-grained spatial grids?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer v1uj\", \"comment\": \"We thank the reviewer for your careful reading and insightful comments! In response to the reviewers' suggestion, we have expanded our evaluation to include multiple datasets with varying sample sizes to test the **scalability of our proposed model**. 
Detailed experiment results and corresponding analysis can be found in **Appendix E** in the revised paper.\\n\\n- **Scalability experiments on NYC Crime Dataset**\\n| Sample Size | 732 | 1840 | 2985 | 4016 | 5561 |\\n|---|---|---|---|---|---|\\n| aRMSE | 2.34 +/- 0.05 | 2.25 +/- 0.06 | 2.29 +/- 0.13 | 2.10 +/- 0.08 | 2.06 +/- 0.10 |\\n| MAPE | 107.94 +/- 4.83 | 103.32 +/- 4.67 | 104.43 +/- 5.33 | 101.83 +/- 4.33 | 100.72 +/- 5.67 |\\n| Time Cost (h) | 0.5859 | 1.3252 | 1.7971 | 2.5254 | 3.8970 |\\n\\n- **Scalability experiments on Chicago Crime Dataset**\\n| Sample Size | 861 | 2434 | 3207 | 4578 | 5321 |\\n|---|---|---|---|---|---|\\n| aRMSE | 2.19 +/- 0.04 | 2.56 +/- 0.08 | 2.12 +/- 0.12 | 2.25 +/- 0.08 | 1.93 +/- 0.10 |\\n| MAPE | 87.38 +/- 3.12 | 94.67 +/- 3.83 | 86.23 +/- 5.22 | 90.94 +/- 4.95 | 85.50 +/- 4.12 |\\n| Time Cost (h) | 0.7785 | 1.6995 | 2.2787 | 3.2730 | 4.0410 |\\n\\n- **Scalability experiments on Shanghai Mobike Dataset**\\n| Sample Size | 1457 | 3347 | 5054 | 6602 | 8786 |\\n|---|---|---|---|---|---|\\n| aRMSE | 4.27 +/- 0.12 | 3.57 +/- 0.08 | 3.06 +/- 0.12 | 2.85 +/- 0.12 | 2.88 +/- 0.67 |\\n| MAPE | 172.50 +/- 8.48 | 158.33 +/- 8.25 | 151.54 +/- 7.33 | 148.34 +/- 7.50 | 148.67 +/- 8.19 |\\n| Time Cost (h) | 1.4245 | 2.4924 | 3.8226 | 5.5305 | 7.1056 |\\n\\nFor the NYC dataset, the daily time horizon was divided into four time slots, and the New York area was segmented into 100 small area blocks based on longitude and latitude. We use the same temporal and spatial resolution for the Chicago dataset. For Shanghai Mobike, we divide the daily time horizon into 6 time grids and divide the Shanghai area into 100 area blocks. Across all experiments, as the sample size increases, both evaluation metrics, aRMSE and MAPE, decrease for the prediction task. Taking the NYC dataset as an example, with an increase in data size from the current 732 samples to 5561 samples, mean aRMSE (over three different seeds) decreases to 2.06, and MAPE decreases to 100.72. 
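For reference, the two metrics reported in these tables can be computed as below. This is a hypothetical sketch assuming aRMSE averages the per-cell RMSE over spatial cells and MAPE is the mean absolute percentage error; the paper's exact definitions may differ.

```python
import math

def armse(pred_grid, true_grid):
    # Average the per-cell RMSE over all spatial cells.
    cell_rmses = []
    for pred, true in zip(pred_grid, true_grid):
        mse = sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
        cell_rmses.append(math.sqrt(mse))
    return sum(cell_rmses) / len(cell_rmses)

def mape(pred, true):
    # Mean absolute percentage error over cells with nonzero ground truth.
    pairs = [(p, t) for p, t in zip(pred, true) if t != 0]
    return 100.0 * sum(abs(p - t) / t for p, t in pairs) / len(pairs)
```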
The training time required for model convergence remains within acceptable limits on the current computing infrastructure (details provided in Appendix D). For the NYC dataset with 5561 samples, the model converges in only 3.8970 hours. Even for the large-scale Mobike dataset with 8786 samples, our model converges and achieves good inference and prediction results after training for approximately 7.1056 hours. For all these datasets, the model's prediction accuracy improves with larger sample sizes; the required training time grows but remains within a reasonable range.\"}", "{\"title\": \"Response to Reviewer swpn\", \"comment\": \"**Continuing with W6**:\\n- **For our claim of human decision process**: In our experiment, we consider the severity of the crime, suspect race, and suspect gender as key categorical features. The utility score for each time-location pair is determined through regression analysis involving these features, with the coefficients reflecting the magnitude of impact on the utility score. For each expert, the utility score patterns are different, reflecting different preference patterns. We have taken the NYC Crime dataset as an example and the results are shown in heatmaps in Appendix G of our revised paper.\\n\\nTo illustrate the \\\"human decision process\\\" more clearly, we take as an example the top-5 crime event time-location pairs with the largest utility scores for different races under expert-1. The results are reported in the table below and the time-location pairs are recorded in \\\"_time, (Lat., Lon.)_\\\" format. From the results one can see that distinct patterns emerge based on the time and location of crime events across various racial groups. Black suspects tend to engage in criminal activities during the early morning or late night hours, while White Hispanic suspects are less active in criminal activities during the early morning hours. The timing of criminal activities among suspects of other races is more varied. 
At the regional level, the concentration areas for criminal activities among suspects of different races vary significantly. By incorporating social norms or individual information of suspects into the utility function, our approach better captures the role of individual differences and human decision-making processes in engaging in criminal activities.\\n\\n| Race | W | AI/AN | WH | B | BH | A/PI |\\n|---|---|---|---|---|---|---|\\n| Top-1 | 0h-6h, (40.577, -73.845) | 12h-18h, (40.808, -73.895) | 12h-18h, (40.770, -73.794) | 0h-6h, (40.731, -73.995) | 18h-24h, (40.693, -73.744) | 0h-6h, (40.538, -74.196) |\\n| Top-2 | 18h-24h, (40.616, -74.045) | 0h-6h, (40.693, -73.995) | 18h-24h, (40.847, -73.845) | 0h-6h, (40.731, -73.895) | 0h-6h, (40.616, -74.146) | 0h-6h, (40.808, -73.845) |\\n| Top-3 | 0h-6h, (40.693, -73.845) | 0h-6h, (40.693, -73.995) | 6h-12h, (40.693, -73.945) | 0h-6h, (40.847, -73.895) | 0h-6h, (40.770, -73.995) | 18h-24h, (40.616, -73.795) |\\n| Top-4 | 12h-18h, (40.654, -73.945) | 18h-24h, (40.770, -73.895) | 6h-12h, (40.847, -73.845) | 18h-24h, (40.731, -73.995) | 6h-12h, (40.808, -73.845) | 12h-18h, (40.577, -73.995) |\\n| Top-5 | 18h-24h, (40.654, -73.895) | 12h-18h, (40.885, -73.895) | 12h-18h, (40.847, -73.845) | 0h-6h, (40.808, -73.845) | 18h-24h,(40.770, -73.945) | 12h-18h, (40.731, -73.945) |\"}", "{\"title\": \"Response to Reviewer VmVn\", \"comment\": \"We are grateful for your careful reading and useful suggestions! Below, we will address your concerns one by one.\", \"in_response_to_your_mentioned_weaknesses_and_questions\": \"**W1 & Q1**: Thanks for your insightful suggestion! We have added a scalability experiment to test the performance of our proposed model in large-scale datasets. Please refer to our response for **Reviewer v1uj** for the complete results across all three dataset we used in our paper. We have also incorporated the scalability experiments in **Appendix E** in our revised paper. 
Here we provide the corresponding analysis to address your concerns.\\n\\nAcross all experiments, as the dataset sample size increases, both evaluation metrics, aRMSE and MAPE, generally decrease for the prediction task. For the NYC dataset, with an increase in data size from the current 732 samples to 5561 samples, mean aRMSE over three random runs decreases from 2.34 to 2.06, and mean MAPE decreases from 107.94 to 100.72. For the Chicago dataset, with an increase in data size from the current 861 samples to 5321 samples, mean aRMSE over three random runs decreases from 2.19 to 1.93, and mean MAPE decreases from 87.38 to 85.50. For the Mobike dataset, to obtain datasets with varying sample sizes while ensuring consistency in the differences between sample sizes across datasets, we reselected data from August 2, 2016, to August 6, 2016. The results indicate that with an increase in data size from 1457 samples (one day) to 8786 samples (five days), the mean aRMSE over three random runs sharply decreases from 4.27 to 2.88, and the mean MAPE decreases from 172.50 to 148.67. \\n\\nThe training time required for model convergence remains within acceptable ranges on the current computing infrastructure (details provided in Appendix D). For the large-scale NYC dataset with 5561 samples, the model converges in only 3.8970 hours. For the large-scale Chicago dataset with 5321 samples, the model converges in around 4.0410 hours. Even for the large-scale Mobike dataset with 8786 samples, our model converges and achieves good inference and prediction results after training for approximately 7.1056 hours, showcasing the good scalability of our proposed model.\\n\\n**W2 & Q2**: Our approach aims to uncover the preference-driven decision-making processes. The mixture of experts is used to capture the thinking patterns of different latent groups. 
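A gate-weighted mixture of experts of the kind described here can be sketched as follows. This is a hypothetical minimal version (each expert shares one architecture, reduced to a linear score for brevity, with its own parameters), not the authors' implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax: gate weights sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expert_output(params, x):
    # Every expert uses the same architecture but its own parameters.
    w, b = params
    return w * x + b

def mixture_output(expert_params, gate_logits, x):
    # The mixture is the gate-weighted sum of the expert outputs.
    weights = softmax(gate_logits)
    return sum(g * expert_output(p, x) for g, p in zip(weights, expert_params))
```

With equal gate logits, the mixture reduces to the plain average of the expert outputs; a trained gate instead emphasizes the experts (latent groups) that explain a sample best.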
In addition to the Mixture of Experts module, our model incorporates choice theory and a sparse gating function to effectively capture the intricate patterns in spatial-temporal event data. Notably, for certain datasets, the mixture patterns may not be distinctly observable. Our model can capture these subtle differences and thus enhance the model performance. For instance, as illustrated in Figure 7 of the appendix, the first two mixtures from the Chicago dataset exhibit considerable similarity, while the third mixture contributes minimally with weight 0.1056. Guided by these complex mixtures, as demonstrated in Table 2, for the Chicago dataset, our model's performance significantly surpasses that of the baseline models.\"}", "{\"title\": \"Thank you for the responses\", \"comment\": \"Thank you for providing the clarifications. To enhance the paper's reproducibility and clarity, it would be helpful to incorporate the above discussions into the manuscript. Additionally, the justifications for matrices A and B could be further strengthened. It might be beneficial to include additional discussions or experiments to better illustrate the necessity and impact of their flexibility.\"}", "{\"title\": \"Response to Reviewer swpn\", \"comment\": \"**W3**: The matrices A and B encode the learnable embedded time-location information for all pairs across the various latent classes. The benefit of the intermediate matrix decomposition-based embedding is to capture the interaction impacts of different temporal-spatial grids by utilizing the cross product of A and B.\\n\\n**W4**: Our sparse gating function generates a sparse vector, which consists of weights for each temporal-spatial grid. This vector retains only the top-k most significant choices with nonzero weights. The ranking of these weights reflects the preference order among choices. The probability for each temporal-spatial grid depends on both the ranking and the utility. The ranking mechanism is used to reduce the dimension, i.e., from a high-dimensional choice set to a much smaller k-dimensional choice set. Since we aim to use our modeling framework to faithfully represent the human decision-making process, the ranking mechanism delineates the initial step in that process \\u2014 selecting a subset of candidates. The second step involves evaluating the utility of each choice. 
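These two steps — shortlist the top-k candidates, then weight only the survivors by their utilities — can be sketched as below. This is a hypothetical minimal version with illustrative names, not the authors' gating function:

```python
import math

def sparse_gate(utilities, k):
    # Step 1: rank all choices by utility and keep only the top-k.
    order = sorted(range(len(utilities)), key=lambda i: utilities[i], reverse=True)
    kept = set(order[:k])
    # Step 2: softmax-normalize the utilities of the survivors;
    # every other choice receives exactly zero weight.
    m = max(utilities[i] for i in kept)
    exps = {i: math.exp(utilities[i] - m) for i in kept}
    total = sum(exps.values())
    return [exps[i] / total if i in kept else 0.0 for i in range(len(utilities))]
```

The returned vector is sparse (at most k nonzero entries) and sums to 1, so it can serve directly as choice probabilities over the shortlisted temporal-spatial grids.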
Although each event is treated independently, it emerges from underlying mixtures exhibiting diverse causal patterns.\\n\\n**W5**: The method we applied in the paper for dividing time and regions is common practice, and the granularity of our time and region divisions is generally similar to methods used in other published works. ST-HSL [5] applies a 3 km by 3 km spatial grid unit to New York City and Chicago, resulting in the generation of 256 and 168 disjoint spatial regions, respectively, slightly more than our 100 disjoint spatial regions. SpatialRank [6] partitions the Chicago area into 500 m by 500 m square cells, while HintNet [7] divides the whole state of Iowa into 5 km by 5 km grids (the area closely aligns with our partitioning) and designates a single day as the appropriate time interval. Following the same approach, we partitioned the time horizon and regions in a similar manner.\\n\\nWe would like to highlight that, according to the official Mobike dataset documentation, the peak hours on workdays are specified as 16:00-20:00. To better capture the peak hour pattern, we designated this time frame as a distinct time slot, resulting in the division of the entire day into 6 time slots, each spanning 4 hours.\\n\\nMoreover, we have also added more experiments to investigate the impact of spatial and temporal grid resolutions on both model performance and computational costs. Please refer to our responses for **Reviewer v1uj** and **Reviewer VmVn, W3 & Q3**. Detailed experiment results and corresponding analysis can also be found in **Appendix F** in our revised paper.\\n\\n[5] Li, Z., Huang, C., Xia, L., Xu, Y., & Pei, J. (2022, May). Spatial-temporal hypergraph self-supervised learning for crime prediction. In 2022 IEEE 38th international conference on data engineering (ICDE) (pp. 2984-2996). IEEE. \\\\\\n[6] An, B., Zhou, X., Zhong, Y., & Yang, T. (2024). SpatialRank: urban event ranking with NDCG optimization on spatiotemporal data. 
Advances in Neural Information Processing Systems, 36. \\\\\n[7] An, B., Vahedian, A., Zhou, X., Street, W. N., & Li, Y. (2022). Hintnet: Hierarchical knowledge transfer networks for traffic accident forecasting on heterogeneous spatio-temporal data. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM) (pp. 334-342). Society for Industrial and Applied Mathematics.\"}", "{\"title\": \"Response to Reviewer swpn\", \"comment\": \"We first thank reviewer swpn for the insightful comments, especially for the questions about the details of our proposed model, which helped us to clarify our paper further. We would like to address the concerns one by one.\", \"in_response_to_your_mentioned_weaknesses\": \"**W1**: We summarize our response into the following aspects:\\n\\n- **Explain more about spatial and positional embeddings**: We treat spatial and positional information as separate embeddings, following [1]. In our approach, the spatial information is analogous to the patch information in [1]: the image (the area region in our problem) is broken down into smaller patches (blocks in our problem), which are treated as individual tokens similar to words in text, allowing the embedding to capture spatial information. The positional information describes the locations of patches (blocks) within the region, helping the model understand their relative positions and enabling it to learn spatial dependencies across the image (area region). 
\\n\\n- **Which model does each expert use?**: As indicated in Equation 5, we employ the same model architecture but utilize different parameters for each expert. \\n\\n- **Key individual features**: Social norms and other pertinent factors are encapsulated within the utility function, formulated in a regression form. We have provided experiment findings and detailed analysis in our response to **W6**. You can find our reply below.\\n\\n[1] Dosovitskiy, A. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.\\n\\n**W2**: Thanks for your suggestion about analyzing more related works. We will incorporate the references you listed into the related work section of our paper. However, the codebases for the referenced methodologies have not been released. To ensure a fair comparison with state-of-the-art models, we have incorporated three more SOTA baseline models.\\n\\n- **HintNet** [2]: It performs a multi-level spatial partitioning to separate sub-regions with different risks and learns a deep network model for each level using spatio-temporal and graph convolutions.\\n- **STNSCM** [3]: a causality-based interpretation model for bike flow prediction.\\n- **UniST** [4]: a universal model designed for general urban spatio-temporal prediction across a wide range of scenarios.\\n\\n| Dataset | NYC Crime | | Chicago Crime | | Shanghai Mobike | |\\n|---|---|---|---|---|---|---|\\n| Metric | aRMSE | MAPE | aRMSE | MAPE | aRMSE | MAPE |\\n| HintNet [2] | 2.53 +/- 0.28 | 110.45 +/- 4.67 | 3.28 +/- 0.16 | 108.48 +/- 3.58 | 3.50 +/- 0.15 | 168.23 +/- 8.83 |\\n| STNSCM [3] | 2.58 +/- 0.20 | 112.33 +/- 5.25 | 2.93 +/- 0.33 | 98.57 +/- 4.37 | 3.12 +/- 0.26 | 152.38 +/- 9.23 |\\n| UniST [4] | 2.26 +/- 0.12 | 105.23 +/- 5.32 | 2.84 +/- 0.13 | 96.23 +/- 5.21 | 3.35 +/- 0.33 | 162.29 +/- 8.43 |\\n| Ours* | 2.34 +/- 0.05 | 107.94 +/- 4.27 | 2.19 +/- 0.04 | 87.38 +/- 3.12 | 3.28 +/- 0.04 | 154.62 +/- 7.05 
|\\n\\nCompared to the newly added baselines, our model surpasses all baselines in prediction tasks on the Chicago dataset. In the NYC dataset, our model demonstrates competitive performance with UniST and outperforms the other two models. The mean aRMSE and mean MAPE across three random runs of our model on the NYC dataset are 2.34 and 107.94, slightly higher than UniST's 2.26 and 105.23, respectively. However, our model exhibits a lower standard deviation, indicating better stability in our predictions. For the Mobike dataset, our model remains the second-best performing model, with a mean aRMSE and mean MAPE of 3.28 and 154.62, respectively, trailing only behind STNSCM with 3.12 and 152.38. Additionally, our model also exhibits lower standard deviation than the STNSCM model. Overall, our model significantly outperforms the models considered in our paper. Against the newly introduced state-of-the-art baselines, our model also achieves stable predictions and competitive results across all three datasets.\\n\\n[2] An, B., Vahedian, A., Zhou, X., Street, W. N., & Li, Y. (2022). Hintnet: Hierarchical knowledge transfer networks for traffic accident forecasting on heterogeneous spatio-temporal data. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM) (pp. 334-342). Society for Industrial and Applied Mathematics. \\\\\\n[3] Deng, P., Zhao, Y., Liu, J., Jia, X., & Wang, M. (2023, June). Spatio-temporal neural structural causal models for bike flow prediction. In Proceedings of the AAAI conference on artificial intelligence (Vol. 37, No. 4, pp. 4242-4249). \\\\\\n[4] Yuan, Y., Ding, J., Feng, J., Jin, D., & Li, Y. (2024, August). Unist: a prompt-empowered universal model for urban spatio-temporal prediction. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 
4095-4106).\"}", "{\"comment\": \"Dear Reviewers, Senior Area Chairs, Area Chairs, and Program Chairs,\\n\\nWe are deeply grateful for the insightful comments and suggestions, which are invaluable for enhancing our work. We are excited that the reviewers hold positive feedback and find our work \\u201cThe paper is methodologically sound, well-structured and provides comprehensive explanations of its key components, and contributes significantly to spatial-temporal modeling.\\u201d (Reviewer v1uj), \\u201cThe studies forecasting problem of spatial-temporal events is very important, interesting, and of high value in the real world.\\u201d (Reviewer VmVn), \\u201cThis paper provides an innovative approach with interpretable insights, good predictive performance, theoretical foundation, and practical flexibility\\u201d (Reviewer cFeC), and \\u201cThe experiments are thorough and the writing is fluent and easy to understand\\u201d (Reviewer swpn).\\n\\nIn our response, we have included additional clarifications and experiments. To ensure transparency and clarity, we have outlined our main responses as follows:\\n\\n- **Scalability and computational cost**: We have expanded our evaluation to include multiple datasets with varying sample sizes to test the scalability of our proposed model. Detailed experiment results and corresponding analysis can be found in **Appendix E** in the revised paper. Encouragingly, for all these datasets, the model's prediction accuracy increases with larger sample size, requiring more training time, yet within reasonable range.\\n\\n- **Impact of spatial and temporal resolution**: We have explored the impact of spatial and temporal grid resolutions on both model performance and computational costs. In **Appendix F** of the revised paper, we have provided the detailed experiment results and corresponding analysis. 
The results presented in our paper reflect a trade-off in selecting resolution based on balancing model performance and the level of detail in capturing time-location pair patterns.\\n\\n- **Hyper-parameter selection**: We have used metrics such as training negative log-likelihood, training time, and prediction accuracy to select hyper-parameters such as learning rate and the number of experts. Experimental results can be found in our response to Reviewer cFeC. \\n\\n- **Elaboration on human decision modeling**: In our experiments, the utility score for each time-location pair is determined through regression analysis involving key human features (such as gender and race), with the coefficients reflecting the magnitude of impact on the utility score. For each expert, the utility score patterns are different, reflecting different preference patterns. We have taken the NYC Crime dataset as an example and the results are shown in heatmaps in **Appendix G** of our revised paper. By incorporating social norms or individual information of humans into the utility function, our approach better captures the role of individual differences and human decision-making processes in engaging in human activities.\\n\\n- **Comparison with extra baselines**: As suggested by Reviewer swpn, we compare our model with three more SOTA baselines (HintNet[1], STNSCM[2], and UniST[3]). Compared with newly added SOTA baselines, our model achieves stable predictions and competitive results across all three datasets.\\n\\nIn addition to the academic contributions, our method holds practical significance. Our proposed approach can capture how individual choices influence the distribution of events over time and space, helping identify overlooked tendencies in human activities. By recognizing preference patterns among different populations, it enables tailored planning and plays a crucial role in guiding human decision-making processes like crime control. 
We believe this innovative approach has the potential to inspire future research endeavors.\\n\\n[1] An, B., Vahedian, A., Zhou, X., Street, W. N., & Li, Y. (2022). Hintnet: Hierarchical knowledge transfer networks for traffic accident forecasting on heterogeneous spatio-temporal data. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM) (pp. 334-342). Society for Industrial and Applied Mathematics. \\\\\\n[2] Deng, P., Zhao, Y., Liu, J., Jia, X., & Wang, M. (2023, June). Spatio-temporal neural structural causal models for bike flow prediction. In Proceedings of the AAAI conference on artificial intelligence (Vol. 37, No. 4, pp. 4242-4249). \\\\\\n[3] Yuan, Y., Ding, J., Feng, J., Jin, D., & Li, Y. (2024, August). Unist: a prompt-empowered universal model for urban spatio-temporal prediction. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 4095-4106).\"}", "{\"title\": \"Response to Reviewer cFeC\", \"comment\": \"We appreciate that reviewer cFeC has a positive impression of our work. To address your concerns about our method, we provide point-wise responses as follows.\", \"in_response_to_your_mentioned_weaknesses\": \"**W1**: Taking the crime datasets as examples, we consider the suspect features such as severity of the crime event, suspect race, and suspect gender as key categorical features to build the utility functions. The utility score for each time-location pair is determined through regression analysis involving these features, with the coefficients reflecting the magnitude of impact on the utility score. For each expert, the utility score patterns are different, reflecting different preference patterns. By doing this, our paper's claim regarding the interpretability of the human decision process can be effectively demonstrated, as distinct suspect features reveal varying crime preferences across different time-location pairs. 
In our response to **Reviewer swpn, W6**, we present the experimental results. Please refer to our response for a comprehensive view of the results and corresponding analysis.\\n\\n**W2 & Q2**: Please refer to our responses for **Reviewer v1uj**. We have also incorporated detailed experiment results and corresponding analysis in **Appendix E** and **Appendix F** of our revised paper.\\n\\n**W3**: Future research could integrate attention mechanisms into the gating function of choice model. This integration may enhance the model\\u2019s flexibility, enabling it to capture a broader range of and long-term information through neural networks. To improve the interpretability, we can also consider integrating the attention mechanisms into the utility function. These ideas serve as promising starting points which will be the future research direction to improve our work.\"}", "{\"summary\": \"This paper presents a new spatial-temporal counting process model that integrates choice theory and social intelligence to capture human decision-driven event occurrences, such as crime rates and bike-sharing usage. The core idea is to use latent utility functions to represent diverse decision-making factors and to apply a mixture-of-experts model with a sparse gating function for adaptive selection. The model aims to reveal underlying patterns in counting processes, providing both predictive power and interpretability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is methodologically sound, with a well-defined approach supported by both theoretical and empirical analyses. 
The experimental setup is robust, including multiple real-world datasets, and the model's performance is compared against established baselines to highlight its predictive strength.\\n\\nThe paper is well-structured and provides comprehensive explanations of its key components, including the latent utility functions, mixture-of-experts model, and gating function. Diagrams and formulas aid in clarifying complex concepts, making the model's framework accessible for readers. \\n\\nThis framework contributes significantly to spatial-temporal modeling, especially in domains where human decision-making drives event occurrences. By enabling a nuanced understanding of preference-driven behavior and offering predictive power, the model has applications in fields like criminology, urban planning, and shared mobility systems.\", \"weaknesses\": \"The use of mixture-of-experts and the sparse selection mechanism may raise concerns regarding computational scalability when applied to large-scale, high-dimensional spatial-temporal data. While the model performs well on mid-sized datasets, it is unclear if the sparse gating function and multiple experts could handle significantly larger spatial grids or finer temporal resolutions without substantial computational costs. A discussion on computational efficiency or optimization strategies, such as parallelization, would strengthen the model\\u2019s applicability to broader scenarios.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims at including human decision processes and social influence to observe criminal event counts. The proposed model is ambitious to include multiple human decision-making aspects, but the details of formulation and examination are missing. 
The experimental setup needs further reference to show its practicality.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The experiments are conducted with three real datasets.\\nS2. The writing is fluent and easy to understand.\", \"weaknesses\": \"W1. Overall, the major concerns are that the paper may not be self-contained and appears disconnected. First, although Fig. 1 visualizes the structure of the proposed model, most details explaining each part are not presented. For instance, what are the differences between spatial and position info? Which model does each expert use? Second, although the abstract and introduction state that social norms, environmental cues, and various other factors are considered, there is no corresponding formulation in Section 3. Finally, the experimental results do not validate these claims either. It is suggested to connect the claims with detailed descriptions in the methods and experiment sections.\\n\\nW2. Please include up-to-date related works in top journals [1][2][3]. Moreover, half of the comparative baselines in the experiment section were published more than 10 years ago, which may be too outdated for fair comparisons. It is suggested to compare with newer methods instead.\\n[1] Weichao Liang, Zhiang Wu, Zhe Li, Yong Ge: CrimeTensor: Fine-Scale Crime Prediction via Tensor Learning with Spatiotemporal Consistency. ACM Trans. Intell. Syst. Technol. 13(2): 33:1-33:24 (2022)\\n[2] Shuai Zhao, Ruiqiang Liu, Bo Cheng, Daxing Zhao: Classification-Labeled Continuousization and Multi-Domain Spatio-Temporal Fusion for Fine-Grained Urban Crime Prediction. IEEE Trans. Knowl. Data Eng. 35(7): 6725-6738 (2023)\\n[3] Weichao Liang, Jie Cao, Lei Chen, Youquan Wang, Jia Wu, Amin Beheshti, Jiangnan Tang: Crime Prediction With Missing Data Via Spatiotemporal Regularized Tensor Decomposition. IEEE Trans. Big Data 9(5): 1392-1407 (2023)\\n\\nW3. 
The definitions of matrices A and B on line 227, page 5, and the purpose of formulating them are unclear. Specifically, what do the two matrices embed, respectively? Additionally, right before introducing these matrices, the model already includes positional, spatial, temporal, and feature embeddings. An alternative approach might be to directly feed these four embeddings to the experts, rather than combining them with the two matrices to avoid additional computational overhead. This raises questions about the necessity, purpose, and benefit of the intermediate matrix decomposition-based embedding method compared to a straightforward alternative.\\n\\nW4. Please clarify the \\u201cranking\\u201d concept in the gating function, starting from line 251 on page 5. Equations 7, 8, and the loss function at line 274 resemble a cross-entropy formulation, which is a classification-based metric rather than a ranking one. Additionally, I am uncertain whether ranking is appropriate in this scenario. Specifically, while predicting the time and place of a crime, a top-1 ranking for occurrence may not directly indicate that a crime is happening, as the probability could still be low. Therefore, relying on ranking rather than probability prediction may lead to false alarms and overreactions.\\n\\nW5. The practicality of the experimental setup is questionable. In the New York Crime and Chicago Crime datasets, each city is divided into 100 areas, and daytime is segmented into 4 time slots. However, it is unclear how large each area is after division. Is there evidence or a reference supporting that the 100-block granularity is beneficial for real-world law enforcement? Similarly, dividing daytime into four 6-hour slots may not be sufficiently granular. Is there a reference justifying this setup? Furthermore, it would be interesting to see the model\\u2019s performance at finer granularities, with smaller areas and shorter time slots.\\n\\nW6. 
The experimental results may not fully examine the authors' claims. While modeling the \\u201chuman decision process\\u201d is a key focus, it is unclear how this is tested in the experiments. Are there specific sequential criminal events in the datasets? If so, does the proposed method successfully retrieve these sequences? How does the model demonstrate that its improvements are due to modeling the human decision process? Otherwise, if each crime is independent, how are the datasets suitable for examining causal relationships? In this context, could simple statistics identify criminal hotspots at specific time slots to yield similar results to those in Fig. 2? It is recommended to elaborate further on human decision modeling in the experiments.\", \"questions\": \"Please refer to W3 to W6.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer VmVn\", \"comment\": \"**W3 & Q3**: Your concern about the resolution is constructive! Employing a finer resolution undoubtedly enhances the model's persuasiveness by capturing more detailed time-location patterns for event occurrences. But we want to emphasize that the method we applied in the paper for dividing time and regions is common practice, and the granularity of our time and region divisions is generally similar to methods used in other published works. ST-HSL [1] applies a 3 km by 3 km spatial grid unit to New York City and Chicago, resulting in the generation of 256 and 168 disjoint spatial regions, respectively, slightly more than our 100 disjoint spatial regions (as we reported in our paper). 
SpatialRank [2] partitions the Chicago area into 500 m \\u00d7 500 m square cells, while HintNet [3] divides the whole state of Iowa into 5 km by 5 km grids (the area closely aligns with our partitioning) and divides a single day as the appropriate time interval.\\n\\nMoreover, we have explored the **impact of spatial and temporal grid resolutions** on both model performance and computational costs. Please refer to our response for **Reviewer v1uj** for the complete experiment results. Detailed experiment results and corresponding analysis can also be found in **Appendix F** in our revised paper.\\n\\nWith finer resolutions for the time grid and region blocks, the model is expected to capture event patterns more accurately and with greater granularity in time and location. However, our experimental results indicate that increasing the fine-grained spatial and temporal resolution does not significantly enhance the model performance. For instance, when comparing Case-4 with Case-2 using a dataset of 2985 samples, despite Case-4 having finer resolution, the mean aRMSE (over three different runs) only decreases from 2.29 to 2.12, and the mean MAPE decreases from 104.43 to 101.30. This could be attributed to the overly detailed partitioning of time and space, leading to insufficient instances of events at each time-location pair, thereby impacting the model's effectiveness. Further validation of this observation is evident when varying the sample size within the same case. For Case-2, increasing the sample size from 2985 to 5561 results in a more significant improvement in model performance, with the mean aRMSE decreasing from 2.29 to 2.06 and the mean MAPE decreasing from 104.43 to 100.72. It addresses the substantial impact of increasing dataset size on model effectiveness. Therefore, the results presented in our paper reflect a trade-off in selecting resolution based on balancing model performance and the level of detail in capturing time-location pair patterns. 
\\n\\n[1] Li, Z., Huang, C., Xia, L., Xu, Y., & Pei, J. (2022, May). Spatial-temporal hypergraph self-supervised learning for crime prediction. In 2022 IEEE 38th international conference on data engineering (ICDE) (pp. 2984-2996). IEEE. \\\\\\n[2] An, B., Zhou, X., Zhong, Y., & Yang, T. (2024). SpatialRank: urban event ranking with NDCG optimization on spatiotemporal data. Advances in Neural Information Processing Systems, 36. \\\\\\n[3] An, B., Vahedian, A., Zhou, X., Street, W. N., & Li, Y. (2022). Hintnet: Hierarchical knowledge transfer networks for traffic accident forecasting on heterogeneous spatio-temporal data. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM) (pp. 334-342). Society for Industrial and Applied Mathematics.\"}" ] }
0Zot73kfLB
GVFi: Learning 3D Gaussian Velocity Fields from Dynamic Videos
[ "Jinxi Li", "Ziyang Song", "Bo Yang" ]
In this paper, we aim to model 3D scene geometry, appearance, and physical information just from dynamic multi-view videos in the absence of any human labels. By leveraging physics-informed losses as soft constraints or integrating simple physics models into neural networks, existing works often fail to learn complex motion physics, or doing so requires additional labels such as object types or masks. In this paper, we propose a new framework named **GVFi** to model the motion physics of complex dynamic 3D scenes. The key novelty of our approach is that, by formulating each 3D point as a rigid particle with size and orientation in space, we choose to directly learn a translation rotation dynamics system for each particle, explicitly estimating a complete set of physical parameters to govern the particle's motion over time. Extensive experiments on three existing dynamic datasets and two newly created challenging synthetic and real-world datasets demonstrate the extraordinary performance of our method over baselines in the task of future frame extrapolation. A nice property of our framework is that multiple objects or parts can be easily segmented just by clustering the learned physical parameters. Our datasets and code will be released at https://github.com/
[ "Dynamic Reconstruction", "Physics", "Motion Extrapolation" ]
Reject
https://openreview.net/pdf?id=0Zot73kfLB
https://openreview.net/forum?id=0Zot73kfLB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkOfrGRp1Q", "vfKKqTvIFc", "vKnBJlBoJ3", "tMkkpB47Ek", "stMFe17tOJ", "rx84bWvvo3", "qh6l8SgZAN", "ovzx0A2uXw", "ogLVAI2xip", "i2lW03LGb9", "eOCb0ZHIrg", "YWvYtpSxRd", "UfnN7smUU5", "Tz7VrMCxsZ", "TdebGuN5cs", "SMLwRj82yn", "RqtXmp7b7k", "PnWyAB9Uxe", "OwRcsdoLLL", "OK7tquylLh", "OIqCYaKzwR", "MxJTKJSBop", "LZKIvzucD9", "KLAGrUvqJe", "EygXPI4gPn", "DxUFUVhQQG", "B0OOuOVL51", "A3ThcQhK1K", "8NTaIs484I", "75p4AEekMv", "6hA5TBzdNE", "5IuF9Al7fS", "2ZqRRXnnwl" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733169521791, 1737523472430, 1733169627336, 1733047579395, 1733012288701, 1730544569098, 1732527929775, 1732528511029, 1734749283831, 1732528258727, 1730714544820, 1732981476637, 1732527447184, 1732527550467, 1732683725927, 1732526531362, 1733035098466, 1733169268695, 1733047669278, 1732982611255, 1732527165003, 1732529066473, 1730849187211, 1732528402541, 1733169969801, 1732528217562, 1732624506305, 1732527144080, 1730148054834, 1732982316317, 1732755913945, 1732528876370, 1732528014063 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_3rJ7" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_i7XQ" ], [ 
"ICLR.cc/2025/Conference/Submission1874/Reviewer_3rJ7" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Area_Chair_8fDp" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_j1SG" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_3rJ7" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_3rJ7" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_gAFC" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_i7XQ" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Reviewer_gAFC" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ], [ "ICLR.cc/2025/Conference/Submission1874/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**Comment \\\\#2: On the Significance and Implementation of Extrapolation**\\n\\n**In my previous comments, I suggested that under the strong physical priors assumed in your method, reconstruction followed by editing could achieve extrapolation over short time windows. 
While this may not be the optimal solution, it could still perform well given the datasets and experimental settings in the paper.**\\n\\n**The proposed strategy does not rely on manual segmentation and more human knowledge than your method. Instead, it leverages the learned 4D representations to estimate the motion attributes of individual Gaussian primitives. These representations inherently model the motion trajectories of Gaussians in canonical space, making it possible to extrapolate motion without additional segmentation or human intervention.**\\n\\n**However, since the revised version has de-emphasized these physical assumptions, it is crucial to demonstrate the effectiveness of your method in more practical, real-world scenarios. Currently, the experiments are constrained by datasets with limited temporal frames and overly simple motion patterns. If your method can demonstrate satisfactory performance on more diverse and realistic datasets, such as the two I suggested, I would be more inclined to reconsider my evaluation.**\\n\\n**Response \\\\#2:** First, we appreciate the reviewer for suggesting a potential method \\\\``estimating motion attributes or trajectories, making it possible to extrapolate\\\", yet the core issue is _what attributes to learn and how to extrapolate_. In fact, our method can be regarded as one instantiation of such a pipeline: ``estimating motion attributes (translation rotation parameters), followed by second- or third- order extrapolation via our Equation (4)\\\". \\n\\nSecond, regarding the assumption, again, we hope to reach a consensus that, as shown in Equation (4), our basic physical assumption is up to a second-order relationship, and it can be easily extended to a third-order relationship. \\n\\nThird, regarding the experiments, here is the summary: we evaluate our method on five synthetic and real-world datasets comprising multiple rigid and deformable scenarios. 
Exactly following the fair and extensive evaluation protocols established by baselines in the community, our method clearly achieves the best performance in future frame extrapolation and motion segmentation compared to all existing works. \\n\\nNevertheless, the field of 3D physical learning is still in its infancy, and the benchmarking datasets are yet to be large-scale and diverse, ideally being similar as ImageNet for image classification in the coming years. Apparently, such a desired goal shared by the reviewer is hardly achievable in a single paper like ours. We respectfully hope the reviewer reconsiders your evaluation from the actual progress of the specific field of study.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Comment \\\\#3: On Feedback from Other Reviewers**\\n\\n**While all reviewers, myself included, acknowledge the substantial effort and extensive work you have invested in this paper, there remains a shared concern regarding the method\\u2019s contribution and novelty. The current experimental design and results do not adequately establish the method\\u2019s generalizability. Although you have constructed a new dataset, the scenarios included are overly simplistic and fail to significantly enhance the diversity of the experiments.**\\n \\n**In summary, I commend your dedication to improving the manuscript and your thoughtful responses to the reviewers\\u2019 feedback. I have also invested significant time and effort into carefully reading, analyzing, and reflecting on your paper and its revisions, as I genuinely hope to see this work evolve into a more complete and robust contribution.**\\n\\n**Despite the extensive experiments, my concerns remain regarding the method\\u2019s reliance on above-mentioned implicit assumptions, the limited representativeness of the datasets, and the lack of a significant contribution over existing methods. 
If you can further demonstrate that your method is not constrained by these implicit assumptions and represents a generalizable approach to learning motion dynamics, with meaningful extrapolation results in real-world scenarios, I would be happy to reassess the paper and consider raising my score.**\\n\\n**Response \\\\#3:** We highly appreciate the reviewer's time and effort on our paper over the past weeks. Your insightful comments and thought-provoking questions have significantly improved our manuscript. \\n\\nWe also thank all four responsible and professional reviewers for acknowledging our substantial efforts invested in this paper. Like all researchers, we hope that our own contributions - namely, the neat idea of learning physical parameters and two new datasets - could be truly valued by reviewers. \\n\\nLastly, regarding the mentioned implicit assumption, to be short, we do not apply an implicit assumption of overall rigidity, because we do not assume that neighboring particles should predict the same physical parameters in our design, but leaving the network to learn per-particle physical parameters separately from RGB images, thus adapting to either rigid or non-rigid motions automatically. Ultimately, the learned per-particle physical parameters enable meaningful future frame extrapolation.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed response to my previous comments. I appreciate the substantial effort you have devoted to addressing the reviewers\\u2019 feedback and improving the manuscript. However, I must point out that the revisions and your responses have not fully resolved my primary concerns, which are centered around the following points: (1) the validity of the physical assumptions underlying your method, and (2) the representativeness of the datasets used to evaluate the effectiveness and generalizability of the proposed method, and (3) the limited contribution of the proposed method. 
Below, I provide a detailed discussion of your responses.\\n\\n---\\n\\n1. On the Physical Assumptions\\n\\nYou stated that \\u201cwe never assume motion without external forces.\\u201d However, this is inconsistent with the original manuscript (Lines 219\\u2013221), where it is explicitly stated: \\u201cHere we make an assumption that there is no additional force involved after t = 0.\\u201d While this statement has been removed in the revised version, and you clarified that the assumption is now one of \\u201cconstant force\\u201d rather than \\u201cno additional force,\\u201d it suggests that the method\\u2019s motivation and foundation are still fundamentally rooted in this assumption.\\n\\nWhile I acknowledge your clarification that the method assumes rigidity at the level of the Gaussian primitives rather than the entire object. However, from a physical perspective, if the components of an object are rigidly connected and subjected to a constant force, the overall motion is typically expected to be rigid as well. This is consistent with your experimental results, where the method performs well under scenarios of overall rigid motion but struggles significantly in cases of non-rigid motion, such as those illustrated in Figure 13 (a scenario that is notably different from other dataset in the paper and more representative of real-world conditions). This suggests that while the method explicitly models local rigidity at the component level, its practical effectiveness appears more aligned with an implicit assumption of overall rigidity.\\n\\nIf the goal of the method is to generalize beyond rigid motion, the reliance on this implicit assumption under constant force significantly limits its applicability to more complex, real-world motion scenarios. \\n\\n---\\n\\n2. 
On the Significance and Implementation of Extrapolation\\n\\nIn my previous comments, I suggested that under the strong physical priors assumed in your method, reconstruction followed by editing could achieve extrapolation over short time windows. While this may not be the optimal solution, it could still perform well given the datasets and experimental settings in the paper.\\n\\nThe proposed strategy does not rely on manual segmentation and more human knowledge than your method. Instead, it leverages the learned 4D representations to estimate the motion attributes of individual Gaussian primitives. These representations inherently model the motion trajectories of Gaussians in canonical space, making it possible to extrapolate motion without additional segmentation or human intervention. \\n\\nHowever, since the revised version has de-emphasized these physical assumptions, it is crucial to demonstrate the effectiveness of your method in more practical, real-world scenarios. Currently, the experiments are constrained by datasets with limited temporal frames and overly simple motion patterns. If your method can demonstrate satisfactory performance on more diverse and realistic datasets, such as the two I suggested, I would be more inclined to reconsider my evaluation.\\n\\n---\\n\\n3. On Feedback from Other Reviewers\\n\\nWhile all reviewers, myself included, acknowledge the substantial effort and extensive work you have invested in this paper, there remains a shared concern regarding the method\\u2019s contribution and novelty. The current experimental design and results do not adequately establish the method\\u2019s generalizability. Although you have constructed a new dataset, the scenarios included are overly simplistic and fail to significantly enhance the diversity of the experiments. \\n\\n---\\n\\nIn summary, I commend your dedication to improving the manuscript and your thoughtful responses to the reviewers\\u2019 feedback. 
I have also invested significant time and effort into carefully reading, analyzing, and reflecting on your paper and its revisions, as I genuinely hope to see this work evolve into a more complete and robust contribution. \\n\\nDespite the extensive experiments, my concerns remain regarding the method\\u2019s reliance on above-mentioned implicit assumptions, the limited representativeness of the datasets, and the lack of a significant contribution over existing methods. If you can further demonstrate that your method is not constrained by these implicit assumptions and represents a generalizable approach to learning motion dynamics, with meaningful extrapolation results in real-world scenarios, I would be happy to reassess the paper and consider raising my score.\"}", "{\"comment\": \"Thank you to the authors for their thorough rebuttal. The additional experiments assuage my concerns as to whether the method actually reasonably works and can handle non-cherry-picked scenes.\\n\\nOverall, I think the actual core contribution is very incremental but a neat idea nonetheless and possibly the basis for future work moving the ball forward on this. Additionally, the presentation has been improved but still needs a lot more work. Given the additional experiments, I have changed my review score from a 3 to a 6 on the condition that the authors further _significantly_ work on the paper presentation --- honestly, a significant rewrite might be in order. 
There's no enforcement mechanism to ensure the authors actually do this, but given how hard they worked for the rebuttal experiments I feel like it's possible they can also clean up the writing.\"}", "{\"summary\": \"The paper introduces GVFi, a framework for modeling the motion physics of complex dynamic 3D scenes using multi-view RGB videos without requiring additional annotations such as object shapes, types, or masks.\\nBuilding on Deformable3DGS, GVFi incorporates constraints based on the laws of classical mechanics to guide motion predictions, ensuring that the Gaussian deformation estimated by the MLP aligns more closely with physical principles. By assuming that motion adheres to the laws of classical mechanics and explicitly learning the associated motion parameters, GVFi is capable of performing effective extrapolation rendering, allowing it to predict frames beyond the observed time span. Experimental results show that GVFi significantly outperforms existing methods, particularly excelling in future frame extrapolation tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Modeling the motion of Gaussians through a Translation Rotation Dynamics System grounded in classical mechanics, resulting in a concise and conceptually elegant framework with solid mathematical and physical foundations.\\n2. Introducing an effective method to train the motion parameters of the Translation Rotation Dynamics System, enabling the accurate estimation of translation and rotation dynamics for each particle in the scene.\\n3. By explicitly learning motion parameters under classical mechanics, enabling effective extrapolation to unobserved frames and presenting potential for generation tasks that require plausible future frames in dynamic 3D scenes.\\n4. 
The proposed approach is validated on two tasks, demonstrating superior performance compared to previous methods, highlighting its effectiveness in modeling motion dynamics in 3D scenes.\", \"weaknesses\": \"1. The contributions of this work are somewhat incremental, as most of the methodological design heavily overlaps with the baseline method, Deformable3DGS [1]. The key difference lies in the incorporation of dynamical principles, primarily to enable extrapolation capabilities rather than introducing fundamentally novel approaches.\\n2. The proposed motion modeling framework is overly restrictive, relying on a strong assumption of no external forces, disregarding energy transfer processes, and lacking the ability to handle non-rigid or nonlinear motion. These limitations significantly reduce the model\\u2019s applicability to real-world physics.\\n3. Due to its reliance on idealized assumptions and limited scope, the model struggles to handle complex, real-world motion dynamics where varied forces, interactions, and non-rigid behaviors are prevalent, limiting its utility for practical applications in diverse environments.\\n\\n[1] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction. CVPR, 2024.\", \"questions\": \"1. Based on the methodology, there seem to be three possible approaches for interpolation rendering: (1) directly using $f_{defo}$ to predict the deformation at the given time $t$, (2) progressively calculating the Gaussian deformation at the given time $t$ from time 0 using the motion parameters predicted by $f_{trd}$, or (3) following the steps described in lines L261-L269. Which approach was used in the experiments? Are the results consistent across these three methods?\\n2. For extrapolation rendering according to lines L261-L269, it seems feasible to use either the second or third approach from question 1. 
Which method was actually used by the authors? If the third approach was used, how does it perform over longer extrapolation periods? Could the authors provide visual results for extrapolations that extend beyond the time span covered in the dataset?\\n3. The choice of baseline methods for comparison appears limited. For a comprehensive evaluation, it would be beneficial to compare against state-of-the-art methods in dynamic scene reconstruction, such as 4D-GS[2] and more recent work like E-D3DGS [3], which both have architectures similar to Deformable3DGS but differ in their motion representation. Could the authors verify if the proposed Translation Rotation Dynamics System can be integrated into these methods and whether it would yield similar performance gains?\\n4. The authors claim that their framework is a general approach for modeling motion physics in complex dynamic 3D scenes. However, the datasets used, with only 60 frames in total, limit the complexity and extent of motion. Could the authors validate this claim by testing on more challenging synthetic and real-world datasets, such as the ParticleNeRF and PanopticSports datasets, to provide a more comprehensive evaluation of the framework\\u2019s effectiveness on complex scenes?\\n5. In the ablation study, the authors provide a rationale for their choice of $\\\\delta t$, which is somewhat reasonable. However, this conclusion is based on results from only one dataset, which may not be sufficient, as each dataset could exhibit different motion characteristics. Could the authors clarify how to select an appropriate $\\\\delta t$ in practice across diverse datasets?\\n6. The experimental details are insufficient, particularly regarding training time, required resources, storage size, and rendering speed. Could the authors provide more comprehensive information on these aspects?\\n7. Please ensure that all abbreviations and technical terms are clearly defined, with full explanations and necessary citations. 
In the related work section, it would be helpful to explicitly clarify the differences from relevant works wherever possible.\\n\\n[2] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.\\n\\n[3] Jeongmin Bae, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh. Per- gaussian embedding-based deformation for deformable 3d gaussian splatting. In Proceedings of the European Conference on Computer Vision (ECCV), 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q3: Due to its reliance on idealized assumptions and limited scope, the model struggles to handle complex, real-world motion dynamics where varied forces, interactions, and non-rigid behaviors are prevalent, limiting its utility for practical applications in diverse environments.**\\n\\n**A3:** Following our response in A2, we further validate the applicability of our updating scheme in Equation 4. In particular, we conduct the following two groups of experiments to learn complex dynamics: 1) sliding window based incremental learning (also requested by the reviewer **j1SG**), and 2) ablations of first-/third- order updating schemes in Equation 4 (also requested by the reviewer **i7XQ**). \\n\\n**1) incremental learning**: We conduct experiments on three self-propelled objects from the Dynamic Object Dataset. To be specific, we first feed time $t=0\\\\sim 0.15$ to train the network, and evaluate novel view interpolation on $t=0\\\\sim 0.15$, future frame extrapolation on $t=0.15\\\\sim 0.30$. Next, we include $t=0.15\\\\sim 0.30$ to train, and evaluate novel view interpolation on $t=0\\\\sim 0.30$, future frame extrapolation on $t=0.30\\\\sim 0.45$. 
We keep adding a time interval of 0.15 until we train from $t=0\\sim 0.75$, and extrapolate from $t=0.75\\sim 0.9$. \\n\\nThe following Table 2 (Table 5 in revised paper) shows quantitative results. It can be seen that DefGS suffers from overfitting the previous timestamps and its interpolation performance decreases, while our model can stably adapt to new observations and achieve excellent past and future frame predictions. This means that even though the internal forces are changing for self-propelled objects, our model can easily adapt to new observations.\\n\\n**Table 2:** _Quantitative results (PSNR) of incremental learning._\\n| Interpolation | $0.15\\rightarrow0.30$ | $0.30\\rightarrow0.45$ | $0.45\\rightarrow0.60$ | $0.60\\rightarrow0.75$ | $0.75\\rightarrow0.90$ | Average |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| DefGS | 39.386 | 38.745 | 35.818 | 34.531 | 27.904 | 35.277 |\\n| **GVFi (Ours)** | 40.032 | 40.706 | 41.013 | 40.466 | 39.971 | 40.438 |\\n| **Extrapolation** | $0.15\\rightarrow0.30$ | $0.30\\rightarrow0.45$ | $0.45\\rightarrow0.60$ | $0.60\\rightarrow0.75$ | $0.75\\rightarrow0.90$ | **Average** |\\n| DefGS | 23.438 | 21.360 | 19.989 | 19.670 | 17.629 | 20.417 |\\n| **GVFi (Ours)** | 29.958 | 32.260 | 31.384 | 29.527 | 28.958 | 30.417 |\\n\\n**2) first-/third- order ablations**: We conduct ablation experiments for our Equation 4 on Dynamic Object Dataset and Dynamic Multipart Dataset. \\n\\nThe following Table 3 (Table 8 in revised paper) shows the results. We can see that, in Dynamic Object Dataset which has several self-propelled objects whose internal forces tend to change over time, not surprisingly, the third-order variant performs better. 
Nevertheless, due to the inherent over-parametrization, the third-order scheme tends to learn excessive rotation information to represent simple acceleration motions, thus incurring inferior performance on the Dynamic Multipart Dataset which does not have self-propelled objects.\\n\\n**Table 3:** _Quantitative results of ablation studies about 3 orders of Taylor expansion in Equation 4 on Dynamic Multipart Dataset and Dynamic Object Dataset._\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| $1^{st}$-order | 34.776 | 0.990 | 0.013 | 26.729 | 0.976 | 0.018 | 38.892 | **0.995** | **0.005** | 28.536 | **0.983** | 0.012 |\\n| $2^{nd}$-order | 34.807 | **0.991** | **0.011** | **30.721** | **0.986** | **0.012** | 38.788 | **0.995** | 0.006 | 28.758 | 0.982 | **0.011** |\\n| $3^{rd}$-order | **35.268** | **0.991** | 0.012 | 30.503 | 0.985 | 0.013 | **39.164** | **0.995** | **0.005** | **29.378** | **0.983** | **0.011** |\\n\\nOverall, our method can actually tackle complex dynamics just by applying a sliding window based incremental learning, or simply extending to high-order relationships if needed. In the revised paper, all these new materials have been updated.\"}", "{\"comment\": \"**Q6: Questions 3) The choice of baseline methods for comparison appears limited. 
For a comprehensive evaluation, it would be beneficial to compare against state-of-the-art methods in dynamic scene reconstruction, such as 4D-GS[2] and more recent work like E-D3DGS [3], which both have architectures similar to Deformable3DGS but differ in their motion representation. Could the authors verify if the proposed Translation Rotation Dynamics System can be integrated into these methods and whether it would yield similar performance gains?**\\n\\n**A6:** Thank you for the valuable suggestions. As requested, in Tables 1\\\\&2 of the revised paper (also shown in the following Table 5), we have added the recent 4DGS and E-D3DGS as additional baselines on all four datasets. We can see that, not surprisingly, both methods clearly fail to predict meaningful future frames due to the lack of physics learning, though they can achieve excellent performance for past frame interpolation. \\n\\nAs requested, to demonstrate the flexibility of our framework, we also adopt 4DGS as our auxiliary deformation field, denoted as GVFi$_{4dgs}$. As shown in the following Table, our GVFi$_{4dgs}$ (hyperparameters not tuned due to the limited time for rebuttal) also achieves very good results for future frame extrapolation on most datasets. \\n\\nIn the revised paper, all these new results are added to Section 4.1, further demonstrating the superiority of our method. 
\\n\\n**Table 5:** *Quantitative results of new baselines and our GVFi$_{4dgs}$ on all four datasets.*\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| E-D3DGS | 26.180 | 0.955 | 0.062 | 18.615 | 0.904 | 0.114 | 28.075 | 0.963 | 0.049 | 18.526 | 0.923 | 0.087 |\\n| 4D-GS | **37.021** | **0.992** | _0.014_ | 20.564 | 0.935 | 0.067 | _37.285_ | _0.986_ | _0.020_ | 20.354 | 0.950 | 0.052 |\\n| GVFi$_{4dgs}$ | _36.542_ | _0.991_ | 0.015 | **30.801** | _0.983_ | _0.016_ | 35.961 | 0.985 | 0.021 | _28.316_ | _0.978_ | _0.023_ |\\n| GVFi | 34.807 | _0.991_ | **0.011** | _30.721_ | **0.986** | **0.012** | **38.788** | **0.995** | **0.006** | **28.758** | **0.982** | **0.011** |\\n| | **Dynamic Indoor Scene Dataset** | | | | | | **NVIDIA Dynamic Scenes Dataset** | | | | | |\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| E-D3DGS | 29.267 | 0.874 | 0.222 | 20.374 | 0.772 | 0.307 | _20.848_ | _0.541_ | _0.532_ | 20.301 | 0.565 | 0.522 |\\n| 4D-GS | _29.381_ | _0.889_ | _0.212_ | 21.107 | 0.793 | 0.274 | 19.411 | 0.462 | _0.532_ | 22.510 | 0.703 | 0.408 |\\n| GVFi$_{4dgs}$ | 27.932 | 0.860 | 0.252 | _31.590_ | _0.909_ | _0.194_ | 18.995 | 0.448 | 0.544 | _22.706_ | _0.714_ | _0.400_ |\\n| GVFi | **32.202** | **0.928** | 
**0.089** | **34.556** | **0.964** | **0.046** | **26.943** | **0.891** | **0.102** | **29.388** | **0.938** | **0.067** |\"}", "{\"metareview\": \"The paper addresses the problem of modeling both 3D scene geometry, appearance, and physical information (this last one being the primary novelty). The paper adopts the DefGS representation for 3D scene geometry, appearance, and motion and then adds a translation-rotation dynamics system module along with an optimization strategy. The authors note that this allows for future frame extrapolation. The paper evaluates on several datasets (Dynamic Object, Dynamic Indoor Scene, NVIDIA Dynamic Scene, Dynamic Multipart, GoPro) and demonstrates strong quantitative results, in particular with respect to the extrapolation setting.\\n\\nThe main strength is the interesting problem setting and the quantitative performance on the extrapolation setting. The main concern from reviewers is the lack of technical novelty, in particular over the DefGS framework used, and the strong assumptions being made for the translation-rotation dynamics system which can limit the applicability of this system - noted by reviewers 3rJ7, i7XQ, and j1SG. None of the reviewers advocated to champion this paper since they felt that the limited impact and novelty made the paper borderline.\\n\\nI slightly lean towards rejection, though acceptance as poster would be fine.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 3rJ7 rated the paper a 3. All other reviewers rated the paper a 6 but none felt it was strong enough to advocate for it. Many reviewers shared similar concerns, limited novelty and limited impact due to the strong assumptions being made. 
In almost all the author rebuttals, the authors emphasized their extrapolation results (as well as noting that their method works for a different representation, 4DGS).\"}", "{\"comment\": \"**Q4: Questions 1) Based on the methodology, there seem to be three possible approaches for interpolation rendering: (1) directly using $f_{defo}$ to predict the deformation at the given time $t$, (2) progressively calculating the Gaussian deformation at the given time $t$ from time 0 using the motion parameters predicted by $f_{trd}$, or (3) following the steps described in lines L261-L269. Which approach was used in the experiments? Are the results consistent across these three methods?**\\n\\n**A4:** We use the third approach in the experiments. As requested, we also provide interpolation results using the three approaches in the following Table 4 (Table 9 in revised paper). We can see that the first and the third approaches are not strictly consistent, but achieve very similar performance. However, for the second approach, the performance clearly decreases; we hypothesize that this is due to accumulated errors in the autoregressive process.\\n\\nIn the revised paper, we have clarified our interpolation settings in lines 368-373 of Section 4.1, and added these new results in Table 9 of Appendix A.9. 
\\n\\n**Table 4:** _Quantitative results of different interpolation approaches of our method on all four datasets._\\n| | | **Dynamic Multipart** | | | **Dynamic Object** | | | **Indoor Scene** | | | **Dynamic Scenes** | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| (1)$f_{defo}$ | **35.040** | **0.991** | **0.011** | 38.406 | **0.995** | **0.005** | **32.569** | **0.930** | **0.088** | **26.951** | **0.891** | **0.102** |\\n| (2)$f_{trd}$ | 30.310 | 0.984 | 0.017 | 33.527 | 0.991 | 0.009 | 31.776 | 0.926 | 0.092 | 25.899 | 0.875 | 0.118 |\\n| (3)$f_{defo}+f_{trd}$ | 34.807 | **0.991** | **0.011** | **38.788** | **0.995** | 0.006 | 32.202 | 0.928 | 0.089 | 26.943 | **0.891** | **0.102** |\\n\\n**Q5: Questions 2) For extrapolation rendering according to lines L261-L269, it seems feasible to use either the second or third approach from question 1. Which method was actually used by the authors? If the third approach was used, how does it perform over longer extrapolation periods? Could the authors provide visual results for extrapolations that extend beyond the time span covered in the dataset?**\\n\\n**A5:** We use the third approach. Our primary goal is to extrapolate meaningful future frames as a continuum of the last training observations, which is achieved by the third approach (_i.e._, following Steps \\\\#1\\\\#2\\\\#3\\\\#4 in Section 3.3 lines 305-313), whereas the progressively accumulated parameters used in the second approach are less meaningful due to accumulated errors. \\n\\nAs requested, we further conduct experiments for much longer extrapolation. 
Particularly, in our main experiments, the training period lasts from $t=0\\sim 0.75$ and the extrapolation period lasts from $t=0.75\\sim 1.0$. Here we show the results until $t=1.5$, which is already twice the training period. \\n\\nAs shown in Figure 21 of Appendix A.15 in the revised paper, we provide qualitative results of longer extrapolation from all four datasets. Note that we are unable to provide quantitative results due to the lack of ground truth images. We can see that our method can still obtain physically meaningful future frame predictions of particularly high quality.\\n\\nIn the revised paper, we have clarified our extrapolation settings in lines 368-373 of Section 4.1, and added the new extrapolation results in Appendix A.15.\"}", "{\"summary\": \"The authors extend multi-view dynamical scene modeling by predicting motion physics parameters without additional supervision. Specifically, they directly predict a translation rotation dynamics system for each 3D particle, which gives the model capabilities in future predictions of trajectories and rigid part discovery via clustering. 
Quantitative and qualitative results show superior performance against prior art on three existing and one proposed benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"[+] The paper is well-organized.\\n\\n[+] The proposed methodology of predicting translation rotation dynamics is straight-forward and well-presented.\\n\\n[+] The emergent behavior of rigid parts through motion clustering is interesting and should be highlighted further.\\n\\n[+] Extensive empirical evaluation on multiple benchmarks demonstrates superior performance, along with proper ablation study and demo video in supplementary.\", \"weaknesses\": [\"[-] My main concern about this work is the assumption made (L219) that \\\"there is no additional force involved after $t=0$.\\\" Although the authors give a justification that \\\"a rolling ball suddenly exploding is not learnable,\\\" I am not sure if the scope of the research is sufficiently broad given this constraint:\", \"First, while some moveable objects cannot move of their own volition, many dynamical (interesting) objects do have the ability to move on their own (e.g. humans, vehicles, animals, etc). By assuming no additional forces after $t=0$, the formulation assumes the presence of no dynamical objects, which conflicts with some of the qualitative results (whale, skater and van). Are we simply modeling these objects in a time window where no force is applied? It would be great if the authors can clarify on how the assumption impacts the modeling of self-propelled objects.\", \"Second, due to the strict assumption made about applied forces, the dynamical scene valid for this method would be rather simple and cannot contain more complex motion with evolving accelerations. 
The authors should elaborate on the types of motion that can / cannot be handled by GVFi.\", \"Finally, since I do not work on this topic, I am not sure how significant my concern above is, and I am happy to change my recommendation as I wait to read other reviewers\\u2019 comments and the authors' response to my review.\"], \"questions\": \"Please refer to weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Comment: Thank the authors for their valuable feedback and for addressing my questions. While my concerns have been resolved, I share the generalization and novelty concerns raised by reviewers 3rJ7, i7XQ, and j1SG. I will maintain my score as a borderline accept.**\\n\\n**Response:** Thank you very much for acknowledging that your concerns have been addressed, and for maintaining your positive score. We would like to make some further clarifications. \\n\\nRegarding our novelty, the reviewers hold their view on the grounds that our method is built on DefGS[1] and thus lacks novelty. However, we would clarify the core differences and our novelty as follows: \\n- DefGS focuses on the problem of interpolation, while our method tackles a rather different problem of physics learning and future extrapolation. This means that our learning objectives (*i.e.*, the set of physical parameters) are fundamentally different from DefGS. \\n- DefGS is just our backbone network, not our contribution. In our rebuttal materials (Response A1 for you, Response A1 for **3rJ7**), we have clearly demonstrated that our method can adopt another backbone 4DGS[2]. This means that downplaying our novelty based on the backbone used is unfair. \\n- Lastly, our method clearly outperforms all baselines by large margins on 5 datasets for accurate future extrapolation and motion segmentation, showing the superiority of our method. 
\\n\\nRegarding the generalization or assumptions, reviewers ignore the fact that: \\n- First, we **never** assume motion without external forces, but with constant (or, in the third-order variant, constantly changing) forces. Such examples include falling balls in gravity and all self-propelled objects in the datasets.\\n- Second, we **never** assume the object's motion is rigid. An object comprises numerous independent particles. A single particle's motion is rigid, but the resulting compounded object motion can be extremely complex. In the datasets, our method can exactly model many self-propelled deformable objects. \\n\\n\\n [1] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction. CVPR, 2024.\\n\\n [2] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.\"}", "{\"comment\": \"**Q4: Core concepts seem poorly motivated; physics priors are common, but why only a second order expansion? Is this really a reasonable assumption in practice? There needs to be more motivation to this choice and more careful analysis of its limitations.**\\n\\n**A4:** As requested, in the revised paper, we have rephrased the assumption, motivation, and scope of our method, particularly clarified in lines 235-254 of Section 3.2. \\n\\nIn addition, we also conduct two more groups of experiments to validate the effectiveness of our method on complex 3D scenes: 1) sliding window based incremental learning (refer to our response in A1), and 2) ablations of first-/third- order updating schemes in Equation 4 (refer to our response in A7 below).\\n\\n**Q5: Figure 1 and 2 are almost the same thing but not very informative. 
A better figure would be demonstrating the taylor series expansion of a single gaussian's trajectory.**\\n\\n**A5:** This is a very helpful comment. In the revised paper, we have redrawn Figure 2, clearly illustrating the underlying trajectory governed by the learned physical parameters. \\n\\n**Q6: The math in section 3 does not feel like it was put there to be informative, but instead to intimidate the reader; after climbing through the notation its basically just saying to compose offsets together to estimate motion. If the authors feel this notational exercise is needed (don't think it is), it should go in the appendix and the main paper should have far more explanatory figures.**\\n\\n**A6:** Thanks for the suggestion. In the revised paper, in Sections 3.2\\\\&3.3, we have consolidated and simplified the core equations, whilst keeping the logic flow fluent and sufficient for a broader audience.\\n\\n**Q7: Ablations do not seem to address the core contribution, which is the assumption of the second order expansion --- what if you only do a first order expansion? Can you attempt to extend this to third order? They briefly mention replacing it with an MLP, but minimal details are provided.**\\n\\n**A7:** This is a very insightful comment. As requested, we further conduct ablation experiments for choosing first-/third- order relationships in our Equation 4 on Dynamic Object Dataset and Dynamic Multipart Dataset. The following Table (Table 8 in revised paper) shows the results. We can see that, in Dynamic Object Dataset which has several self-propelled objects whose internal forces tend to change over time, not surprisingly, the third-order variant performs better. Nevertheless, due to the inherent over-parametrization, the third-order scheme tends to learn excessive rotation information to represent simple acceleration motions, thus incurring inferior performance on the Dynamic Multipart Dataset which does not have self-propelled objects. 
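To make the three compared update orders concrete, below is a minimal illustrative sketch in Python (our own simplification, not the paper's actual implementation; the per-particle parameter names `v`, `a`, `j` are assumptions) of generic Taylor-style translation updates:

```python
import numpy as np

def taylor_translation(x0, v, t, a=None, j=None):
    """Generic Taylor-style translation update for a single particle.

    x0 : (3,) initial position
    v  : (3,) constant velocity (1st-order term)
    a  : optional (3,) constant acceleration (2nd-order term)
    j  : optional (3,) constant jerk (3rd-order term)
    """
    x = x0 + v * t                        # 1st order: constant velocity
    if a is not None:
        x = x + 0.5 * a * t ** 2          # 2nd order: constant acceleration
    if j is not None:
        x = x + (1.0 / 6.0) * j * t ** 3  # 3rd order: constant jerk
    return x
```

Each higher order adds one more learnable vector per particle, which matches the over-parametrization trade-off discussed above; a rotation branch could be expanded analogously.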
Overall, it is indeed interesting yet non-trivial to learn much higher-order relationships and we leave it for future exploration.\\n\\nRegarding our ablation of ``replacing it with an MLP\\\", we originally aim to keep the physics parameters changing over time, thus making it more complex in theory. Particularly, we use the same network architecture of $f_{trd}$, except changing the input from $f_{trd}(\\\\boldsymbol{x})$ to $f_{trd}(\\\\boldsymbol{x}, t)$, to force the change of physics parameters. Nevertheless, its performance is inferior due to the lack of physics consistency over time, as detailed in Appendix A.7. \\n\\nIn the revised paper, we have added the new first-/third- order ablations in Table 8 of Appendix A.8. \\n\\n**Table:** *Quantitative results of ablation studies about 3 orders of Taylor expansion on Dynamic Multipart dataset and Dynamic Object Dataset.*\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| $1^{st}$-order | 34.776 | 0.990 | 0.013 | 26.729 | 0.976 | 0.018 | 38.892 | **0.995** | **0.005** | 28.536 | **0.983** | 0.012 |\\n| $2^{nd}$-order | 34.807 | **0.991** | **0.011** | **30.721** | **0.986** | **0.012** | 38.788 | **0.995** | 0.006 | 28.758 | 0.982 | **0.011** |\\n| $3^{rd}$-order | **35.268** | **0.991** | 0.012 | 30.503 | 0.985 | 0.013 | **39.164** | **0.995** | **0.005** | **29.378** | **0.983** | **0.011** |\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and address the concerns below.\\n\\n**Q1: The contributions of this work are somewhat 
incremental, as most of the methodological design heavily overlaps with the baseline method, Deformable3DGS [1]. The key difference lies in the incorporation of dynamical principles, primarily to enable extrapolation capabilities rather than introducing fundamentally novel approaches.**\\n\\n**A1:** For clarification, our core novelty is the introduced translation rotation dynamics system together with its effective optimization strategy, which allows us to truly learn physical parameters, ultimately achieving future frame extrapolation. By comparison, existing works such as DefGS/4DGS all fail to do so, fundamentally because they do not learn underlying physics priors, though they perform well for past frame interpolation, as extensively verified in Tables 1\\\\&2 in our paper. \\n\\nIn addition, the use of DefGS as our auxiliary deformation field is actually not our novelty. In fact, our introduced translation rotation dynamics system is also amenable to other deformation fields such as 4DGS, achieving satisfactory performance as shown in the following Table 1 (hyperparameters not tuned due to limited time for rebuttal). \\n\\nTo the best of our knowledge, we are the first to learn such a translation rotation dynamics system for modeling dynamic 3D scenes in the literature, and we achieve state-of-the-art performance for future frame extrapolation on five datasets. This clearly demonstrates our significant novelty in the field of study. 
\\n\\nIn the revised paper, we highlight our novelty in lines 93-99 of Section 1.\\n\\n**Table 1:** _Quantitative results of our method with 4DGS as the auxiliary deformation field on four datasets._\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| GVFi$_{4dgs}$ | **36.542** | **0.991** | 0.015 | **30.801** | _0.983_ | _0.016_ | 35.961 | 0.985 | 0.021 | _28.316_ | _0.978_ | _0.023_ |\\n| GVFi | 34.807 | **0.991** | **0.011** | _30.721_ | **0.986** | **0.012** | **38.788** | **0.995** | **0.006** | **28.758** | **0.982** | **0.011** |\\n| | **Dynamic Indoor Scene Dataset** | | | | | | **NVIDIA Dynamic Scenes Dataset** | | | | | |\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| GVFi$_{4dgs}$ | 27.932 | 0.860 | 0.252 | _31.590_ | _0.909_ | _0.194_ | 18.995 | 0.448 | 0.544 | _22.706_ | _0.714_ | _0.400_ |\\n| GVFi | **32.202** | **0.928** | **0.089** | **34.556** | **0.964** | **0.046** | **26.943** | **0.891** | **0.102** | **29.388** | **0.938** | **0.067** |\\n\\n**Q2: The proposed motion modeling framework is overly restrictive, relying on an strong assumption of no external forces, disregarding energy transfer processes, and lacking the ability to handle non-rigid or nonlinear motion. 
These limitations significantly reduce the model\\u2019s applicability to real-world physics.**\\n\\n**A2:** Thank you for pointing out the inaccurate descriptions about our assumption in the original paper. \\n\\nFor clarification, our scheme in Equation 4 follows a second-order relationship to update dynamics parameters for each rigid particle from time $0$ to $t$. It captures up to a constant acceleration from $0$ to $t$, meaning that forces are indeed allowed to generate accelerations and transfer energy.\\n\\nTheoretically, our updating scheme in Equation 4 can be easily extended to higher orders to capture extremely complex dynamics such as self-propelled objects. In addition, as suggested by the reviewer **j1SG**, a simple sliding window based approach can be applied to continuously and incrementally predict future frames given the newest visual observations from sensors, such that the complex dynamics can be well-captured.\\n\\nMore experiment results for learning complex dynamics are provided in the following response A3. In the revised paper, we have rephrased the descriptions about our assumption and scope in lines 235-254 of Section 3.2.\"}", "{\"comment\": \"Thank you for your detailed responses to my comments as well as those of the other reviewers, and for your efforts to improve the manuscript. I have carefully reviewed your replies, the revised paper, as well as the comments and feedback from the other reviewers. After thorough consideration, I have identified some critical issues that significantly limit the generality and originality of the proposed method. Consequently, I have decided to adjust my score from 5 to 3. Below, I outline the main reasons for this decision:\\n\\n---\\n\\n### 1. Limitations of the Core Assumptions\\n\\nThis work relies on a strong assumption that the motion occurs without external forces and that the objects in motion are rigid. 
While this assumption has led to favorable performance in the reported experiments, particularly in the extrapolation mode, I believe that the performance advantage is largely due to the characteristics of the chosen dataset rather than the generalizability of the method.\\n\\nNonetheless, the actual results reveal significant limitations that question the validity of this assumption. For instance, in Figure 13 of the revised paper, while the overall predictions for the skateboarder\\u2019s body are relatively accurate, closer examination reveals that the skateboarder\\u2019s hand undergoes noticeable deformations during motion, which clearly violates the rigidity assumption. This results in poor predictions for finer details and highlights the method\\u2019s inability to handle non-rigid components effectively in real-world scenarios. Furthermore, in the extrapolation mode, the skateboard itself is not reconstructed, which is a critical failure given its integral role in the motion context. These results cast doubt on the method\\u2019s robustness and its applicability beyond datasets that closely conform to the rigid-body assumption.\\n\\n--- \\n\\n### 2. Applicability of the Method and Alternative Approaches\\n\\nI remain skeptical about the practicality of extrapolation based on such a strong assumption. In scenarios where this assumption holds true, simpler reconstruction-based editing approaches may achieve similar or even better outcomes. For example, after reconstruction, it is relatively straightforward to calculate motion properties such as velocity and momentum, which can then estimate the approximate future positions of objects.\\n\\nFrom the visual results provided by the authors (again, referring to Figure 13), the overall visual quality of the extrapolated objects is very poor, even when compared to what could potentially be achieved using the simpler editing method mentioned above. 
This raises serious doubts about the significance and practical utility of the extrapolation mode, and I strongly question whether it is meaningful in its current form.\\n\\n---\\n\\n### 3. Concerns Shared by Other Reviewers\\n\\nIn addition to my own comments, I note that other reviewers have raised similar concerns. These include:\\n\\n- Limited Contribution: For example, Reviewer gAFC mentioned that \\u201cthe novelty of this addition may be somewhat limited.\\u201d\\n- Rationale for Strong Assumptions: Reviewer j1SG stated, \\u201cMy main concern about this work is the assumption made.\\u201d\\n- Experimental Design and Dataset Choices: Several reviewers, including myself, have highlighted that the dataset and experimental settings may not adequately validate the method\\u2019s applicability to more general or realistic scenarios.\\n\\nThese consistent concerns suggest a broader consensus that the work, in its current state, does not sufficiently address its limitations or justify its assumptions.\\n\\n--- \\n\\nBased on these considerations, I believe the current work lacks sufficient evidence to demonstrate its generality, practicality, or originality. While the direction of the study has potential, significant improvements are necessary to strengthen its contributions. I hope these comments are helpful for your revisions, and I remain open to further discussion if needed.\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and address the concerns below.\\n\\n**Q1: This model builds upon DefGS (Yang et al., 2024), with its main contribution being the translation-rotation dynamics system module. However, the novelty of this addition may be somewhat limited.**\\n\\n**A1:** For clarification, our core novelty is the introduced translation rotation dynamics system together with its effective optimization strategy, which allows us to truly learn physical parameters, ultimately achieving future frame extrapolation. 
By comparison, existing works such as DefGS/4DGS[1] all fail to do so, fundamentally because they do not learn underlying physics priors, though they perform well for past frame interpolation, as extensively verified in Tables 1\\\\&2 in our paper. \\n\\nIn addition, the use of DefGS as our auxiliary deformation field is actually not our novelty. In fact, our introduced translation rotation dynamics system is also amenable to other deformation fields such as 4DGS[1], achieving satisfactory performance as shown in the following table (hyperparameters not tuned due to the limited time for rebuttal). \\n\\nTo the best of our knowledge, we are the first to learn such a translation rotation dynamics system for modeling dynamic 3D scenes in literature, and we achieve state-of-the-art performance for future frame extrapolation on five datasets. This clearly demonstrates our significant novelty in the field of study. \\n\\nIn the revised paper, we highlight our novelty in lines 93-99 of Section 1.\\n\\n**Table:** _Quantitative results of our method with 4DGS as the auxiliary deformation field on four datasets._\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| GVFi$_{4dgs}$ | **36.542** | **0.991** | 0.015 | **30.801** | _0.983_ | _0.016_ | 35.961 | 0.985 | 0.021 | _28.316_ | _0.978_ | _0.023_ |\\n| GVFi | 34.807 | **0.991** | **0.011** | _30.721_ | **0.986** | **0.012** | **38.788** | **0.995** | **0.006** | **28.758** | **0.982** | **0.011** |\\n| | **Dynamic Indoor Scene Dataset** | | | | | | **NVIDIA Dynamic 
Scenes Dataset** | | | | | |\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| GVFi$_{4dgs}$ | 27.932 | 0.860 | 0.252 | _31.590_ | _0.909_ | _0.194_ | 18.995 | 0.448 | 0.544 | _22.706_ | _0.714_ | _0.400_ |\\n| GVFi | **32.202** | **0.928** | **0.089** | **34.556** | **0.964** | **0.046** | **26.943** | **0.891** | **0.102** | **29.388** | **0.938** | **0.067** |\\n\\n**Q2: the performance of DefGS (Yang et al., 2024) and GVFi is quite similar, and there appears to be no significant visual difference between the outputs of the two models. Could the authors clarify specific scenarios where the translation-rotation dynamics system module leads to performance improvements?**\\n\\n**A2:** As shown in Tables 1\\\\&2 in our main paper, the performance of DefGS actually lags far behind our method for future frame extrapolation on all datasets. Particularly, our method has 10 points higher on PSNR than DefGS on three datasets (Dynamic Object / Dynamic Indoor Scene / Dynamic Multipart), and 5 points higher on PSNR on two datasets (NVIDIA Dynamic Scene, and our newly collected real-world GoPro dataset). \\n\\nAs discussed in above A1, DefGS fundamentally cannot predict future frames because it does not learn physics priors, though it can achieve good performance for past frame interpolation.\\n\\n[1] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.\"}", "{\"title\": \"Thanks\", \"comment\": \"Dear reviewer i7XQ,\\n\\nThank you very much for your valuable time and positive rating on our paper. Your insightful suggestions have greatly helped us in enhancing the manuscript.\\n\\nYes, with our new rebuttal materials at hand, we are committed to further improving the paper. Specifically, we will further clarify our neat core concept and techniques as you suggested, ensuring a more comprehensive yet concise presentation.\\n\\nBest,\\nAuthors\"}", "{\"comment\": \"**Comment \\\\#1: On the Physical Assumptions**\\n\\n**You stated that \\u201cwe never assume motion without external forces.\\u201d However, this is inconsistent with the original manuscript (Lines 219\\u2013221), where it is explicitly stated: \\u201cHere we make an assumption that there is no additional force involved after t = 0.\\u201d While this statement has been removed in the revised version, and you clarified that the assumption is now one of \\u201cconstant force\\u201d rather than \\u201cno additional force,\\u201d it suggests that the method\\u2019s motivation and foundation are still fundamentally rooted in this assumption.**\\n\\n**While I acknowledge your clarification that the method assumes rigidity at the level of the Gaussian primitives rather than the entire object. However, from a physical perspective, if the components of an object are rigidly connected and subjected to a constant force, the overall motion is typically expected to be rigid as well. This is consistent with your experimental results, where the method performs well under scenarios of overall rigid motion but struggles significantly in cases of non-rigid motion, such as those illustrated in Figure 13 (a scenario that is notably different from other dataset in the paper and more representative of real-world conditions). 
This suggests that while the method explicitly models local rigidity at the component level, its practical effectiveness appears more aligned with an implicit assumption of overall rigidity.**\\n\\n**If the goal of the method is to generalize beyond rigid motion, the reliance on this implicit assumption under constant force significantly limits its applicability to more complex, real-world motion scenarios.**\\n\\n**Response \\\\#1:** Thank you for sharing your points, and we would make the following clarifications as brief as possible to save your precious time. \\n\\nRegarding the assumption, it appears that the core issue is caused by different interpretations of the language term ``no additional force\\\" used before, but it's now removed to avoid confusion in our new version. After two rounds of discussion, we hope to reach a consensus that, as shown in Equation (4), our basic physical assumption is up to a second-order relationship, and it can be easily extended to a third-order relationship (Response A3 to **j1SG**, Response A3 to you, Response A7 to **i7XQ**). This clearly means that our assumption is **constant (or constantly changing in third order type) forces**. \\n\\nRegarding the local rigidity, by treating each particle separately, our method does not apply an implicit assumption of overall rigidity, because we do not assume that neighboring particles should predict the same physical parameters in our design, unless the network learns to achieve so just from training images by itself. And it can achieve so, as verified on scenarios of rigid motions acknowledged by the reviewer. In addition to rigid motions, as demonstrated on many self-propelled deformable objects, our method can indeed effectively learn non-rigid motions, thanks to our neat per-particle physical formulation. \\n\\nRegarding the particularly challenging scenario of Figure 13, we acknowledge that all methods achieve less satisfactory results, though ours is still better. 
For any newly designed algorithm, failure cases are unavoidable, and they typically move the field forward by inspiring more effective future work. We hope the reviewer could reconsider the weight given to the results of Figure 13.\"}", "{\"comment\": \"**Comment \\\\#3: Concerns Shared by Other Reviewers -**\\n\\n**In addition to my own comments, I note that other reviewers have raised similar concerns. 
These include:**\\n\\n- **Limited Contribution: For example, Reviewer gAFC mentioned that \\u201cthe novelty of this addition may be somewhat limited.\\u201d**\\n\\n- **Rationale for Strong Assumptions: Reviewer j1SG stated, \\u201cMy main concern about this work is the assumption made.\\u201d**\\n\\n- **Experimental Design and Dataset Choices: Several reviewers, including myself, have highlighted that the dataset and experimental settings may not adequately validate the method\\u2019s applicability to more general or realistic scenarios.**\\n\\n**These consistent concerns suggest a broader consensus that the work, in its current state, does not sufficiently address its limitations or justify its assumptions.**\\n\\n**Response \\\\#3:** Regarding the novelty, the reviewer **gAFC** holds the view on the ground that our method is built on DefGS[1] and thus lacks novelty. However, we would clarify the core differences and our novelty as follows: \\n\\n- DefGS focuses on the problem of interpolation, while our method tackles a rather different problem of physics learning and future extrapolation. This means that our learning objectives (*i.e.*, the set of physical parameters) are fundamentally different from DefGS. \\n- DefGS is just our backbone network, not our contribution. In our rebuttal materials (Response A1 for **gAFC**, Response A1 for you), we have clearly demonstrated that our method can adopt another backbone 4DGS[2]. This means that downplaying our novelty grounding on the used backbone is unfair. \\n- Lastly, our method clearly outperforms all baselines by large margins on 5 datasets for accurate future extrapolation and motion segmentation, showing the superiority of our method. \\n\\nRegarding the assumption, please refer to our above Response \\\\#1. 
\\n\\nRegarding the experiments and datasets, we exactly follow the established experimental settings on public or our newly collected datasets in the community, conducting an extensive and adequate assessment on five datasets, ultimately achieving the state-of-the-art performance for future frame extrapolation on various general and realistic 3D dynamic scenes. \\n\\nOverall, we respect the reviewer's opinions, but an unbiased judgment which fully takes into account the current literature of 3D physics learning should be more beneficial to the field of study. \\n\\n\\n[1] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction. CVPR, 2024.\\n\\n[2] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.\"}", "{\"comment\": \"**Q3: There are no quantitative results for object segmentation. Would it be possible to evaluate this and compare it to models that rely on human annotations?**\\n\\n**A3:** Thanks for this valuable suggestion. As requested, we include extensive quantitative results on the Dynamic Indoor Scene dataset in Table 3 of Section 4.2 in our revised paper.\\n\\nIn particular, we follow Gaussian Grouping [2] to render 2D object segmentation masks for all 30 views over 60 timestamps on all 4 scenes, _i.e._, 7200 images in total. We compare with **D-NeRF**, **NVFi**, **DefGS** and **DefGS$_{nvfi}$**. We follow NVFi to obtain segmentation results of D-NeRF and NVFi. For the 3DGS-based baselines, we also adopt OGC [3] to segment Gaussians based on scene flows induced from their learned deformation fields. All implementation details are in Appendix. 
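For reference, the per-mask quantities underlying segmentation metrics of this kind can be computed as in the following sketch (a simplified illustration under our own naming; the actual evaluation follows the NVFi/OGC protocols, whose instance-to-instance matching for AP/PQ is omitted here):

```python
import numpy as np

def mask_iou_prec_rec(pred, gt):
    """IoU, precision and recall between one predicted and one ground-truth
    binary mask. Instance-level metrics such as AP and PQ additionally need
    matching predicted instances to ground-truth instances, not shown here."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union > 0 else 1.0
    prec = inter / pred.sum() if pred.sum() > 0 else 1.0
    rec = inter / gt.sum() if gt.sum() > 0 else 1.0
    return float(iou), float(prec), float(rec)
```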
Additionally, we include a strong image-based 2D object segmentation method, Mask2Former [4] pre-trained by human annotations on COCO dataset [5] as a fully-supervised baseline.\\n\\nAs shown in the following Table (Table 3 in revised paper), our method achieves almost perfect object segmentation results on all metrics, significantly outperforming all baselines. This shows that our learned physical parameters correctly model object physical motion patterns and can be easily leveraged to identify individual objects according to their motions, without needing any human annotations.\\n\\n**Table:** _Quantitative results of motion segmentation results on Dynamic Indoor Scene dataset._\\n| | AP$\\\\uparrow$ | PQ$\\\\uparrow$ | F1$\\\\uparrow$ | Pre$\\\\uparrow$ | Rec$\\\\uparrow$ | mIoU$\\\\uparrow$ |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Mask2Former [4] | 65.37 | 73.14 | 78.29 | _94.83_ | 68.88 | 64.42 |\\n| D-NeRF | 57.26 | 46.15 | 59.02 | 56.55 | 62.94 | 46.58 |\\n| NVFi | _91.21_ | _78.74_ | _93.75_ | 93.76 | _93.74_ | _67.64_ |\\n| DefGS | 51.73 | 57.60 | 66.43 | 63.21 | 70.07 | 54.46 |\\n| DefGS$_{nvfi}$ | 55.26 | 62.75 | 69.83 | 69.39 | 72.91 | 56.82 |\\n| GVFi (Ours) | **95.82** | **93.28** | **97.90** | **96.21** | **99.86** | **79.55** |\\n\\n**Q4: Questions 1) The performance and visual results of DefGS and GVFi appear very similar. Could the authors specify scenarios where the translation-rotation dynamics module offers clear advantages?**\\n\\n**A4:** Refer to A1 and A2.\\n\\n**Q5: Questions 2) Could quantitative results for object segmentation be provided, and how does GVFi compare to models that rely on human annotations for this task?**\\n\\n**A5:** Refer to A3.\\n\\n**Q6: Questions 3) Could the authors highlight the novelty compare to DefGS?**\\n\\n**A6:** Refer to A1.\\n\\n[2] Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. In ECCV, 2024.\\n\\n[3] Ziyang Song and Bo Yang. 
OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds. NeurIPS, 2022.\\n\\n[4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention Mask Transformer for Universal Image Segmentation. CVPR, 2022.\\n\\n[5] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Doll\\u00e1r. Microsoft COCO: Common Objects in Context. ECCV, 2014.\"}", "{\"comment\": \"**Q8: Questions 5) In the ablation study, the authors provide a rationale for their choice of $\\\\delta t$, which is somewhat reasonable. However, this conclusion is based on results from only one dataset, which may not be sufficient, as each dataset could exhibit different motion characteristics. Could the authors clarify how to select an appropriate $\\\\delta t$ in practice across diverse datasets?**\\n\\n**A8:** Thanks for this insightful comment and advice. As requested, we further conduct extensive ablations about different choices of $\\\\delta t$ on all 4 datasets, and the results are listed in the following Table 7 (Table 7 in revised paper). We observe that $3\\\\delta t$ works better in extrapolation on three datasets (Dynamic Object/ Dynamic Indoor Scene/ NVIDIA Dynamic Scenes). The basic rule to select an appropriate $\\\\delta t$ is based on the motion range. If the motion changes fast, such that the motion between two consecutive frames is sufficiently apparent, then a smaller $\\\\delta t$ is good enough. 
Otherwise, if the motion is rather slow, then a larger $\\\\delta t$ is preferred.\\n\\nIn the revised paper, we have added these new results in Table 7 of Appendix A.8.\\n\\n**Table 7:** Quantitative results of ablation studies for $\\\\delta t$ on all four datasets.\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| $\\\\delta t$ | 35.128 | **0.991** | **0.011** | 29.441 | 0.984 | 0.013 | **38.929** | **0.995** | **0.005** | 28.506 | 0.981 | 0.013 |\\n| $2\\\\delta t$ | 34.807 | **0.991** | **0.011** | **30.721** | **0.986** | **0.012** | 38.788 | **0.995** | 0.006 | 28.758 | 0.982 | **0.011** |\\n| $3\\\\delta t$ | **35.223** | **0.991** | **0.011** | 30.246 | 0.985 | **0.012** | 38.693 | **0.995** | 0.006 | **29.414** | **0.983** | 0.012 |\\n| | **Dynamic Indoor Scene Dataset** | | | | | | **NVIDIA Dynamic Scenes Dataset** | | | | | |\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| $\\\\delta t$ | 32.179 | **0.929** | **0.089** | 34.387 | 0.964 | 0.046 | 26.823 | **0.891** | **0.101** | 28.781 | 0.934 | 0.070 |\\n| $2\\\\delta t$ | 32.202 | 0.928 | **0.089** | 34.556 | 0.964 | 0.046 | 26.943 | **0.891** | 0.102 | 29.388 | **0.938** | **0.067** |\\n| $3\\\\delta t$ | **32.296** | 0.928 | **0.089** | **35.242** | 
**0.967** | **0.045** | **27.099** | 0.890 | 0.103 | **29.440** | **0.938** | **0.067** |\\n\\n**Q9: Questions 6) The experimental details are insufficient, particularly regarding training time, required resources, storage size, and rendering speed. Could the authors provide more comprehensive information on these aspects?**\\n\\n**A9:** As requested, we have added comprehensive details in Appendix A.4 in the revised paper. Particularly:\\n\\nAs the complexity of different scenes varies, the total number of Gaussians learned for each scene varies from 40k to 1.6M. In general, our training time is about 1.05 times that of DefGS (or 4DGS if built on it). For example, on the _bat_ of Dynamic Object Dataset, DefGS/4DGS need 25 minutes, while we need 27 minutes, a slight additional training cost. Since our additional module is a tiny MLP, we only need 367.4kB of extra storage. Our rendering speed is about 0.85 times that of DefGS (or about 0.8 times that of 4DGS if built on it). For example, on the _bat_ of Dynamic Object Dataset, they achieve 40fps and ours 32fps. We train all our models on a single NVIDIA 3090 24G GPU.\\n\\n**Q10: Questions 7) Please ensure that all abbreviations and technical terms are clearly defined, with full explanations and necessary citations. In the related work section, it would be helpful to explicitly clarify the differences from relevant works wherever possible.**\\n\\n**A10:** In the revised paper, all abbreviations and terms are clearly defined. In lines 132-134 and 146-149, we have clarified the differences from related works.\"}", "{\"summary\": \"This paper introduces GVFi, a novel approach for modeling 3D scene geometry, appearance, and dynamics from multi-view images without the need for human annotations, such as bounding boxes or segmentations. The authors highlight that previous 3D Gaussian Splatting models struggled to capture the underlying motion physics of dynamic scenes. 
In contrast, GVFi treats 3D points as particles in space, each with a learnable size and orientation, enabling the model to learn particle rotation and translation to represent a dynamic system effectively. Experimental results on three diverse datasets show that GVFi significantly outperforms prior 3D Gaussian Splatting models on both interpolation and extrapolation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It is novel to represent the 3D points as particles, which is a well-established concept in robotics. This representation could open up further research topics to improve dynamics modeling.\\n2. This model does not rely on human annotations for motion estimation. It can autonomously group meaningful objects based on motion patterns without requiring any labeled data.\\n3. The authors provide both quantitative and qualitative results across multiple datasets, demonstrating GVFi\\u2019s improvements in both interpolation and extrapolation tasks.\", \"weaknesses\": \"1. This model builds upon DefGS (Yang et al., 2024), with its main contribution being the translation-rotation dynamics system module. However, the novelty of this addition may be somewhat limited.\\n2. The performance of DefGS (Yang et al., 2024) and GVFi is quite similar, and there appears to be no significant visual difference between the outputs of the two models. Could the authors clarify specific scenarios where the translation-rotation dynamics system module leads to performance improvements?\\n3. There are no quantitative results for object segmentation. Would it be possible to evaluate this and compare it to models that rely on human annotations?\", \"questions\": \"1. The performance and visual results of DefGS and GVFi appear very similar. Could the authors specify scenarios where the translation-rotation dynamics module offers clear advantages?\\n2. 
Could quantitative results for object segmentation be provided, and how does GVFi compare to models that rely on human annotations for this task?\\n3. Could the authors highlight the novelty compare to DefGS?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q3: Second, due to the strict assumption made about applied forces, the dynamical scene valid for this method would be rather simple and cannot contain more complex motion with evolving accelerations. The authors should elaborate on the types of motion that can \\\\/ cannot be handled by GVFi.**\\n\\n**A3:** Thanks for this helpful suggestion. To validate the effectiveness of our method on more complex motions with evolving accelerations, as also requested by the reviewer **i7XQ**, we further conduct ablation experiments for choosing first-/third- order relationships in our Equation 4 on Dynamic Object Dataset and Dynamic Multipart Dataset. \\n\\nThe following Table (Table 8 in revised paper) shows the results. We can see that, in Dynamic Object Dataset which has several self-propelled objects whose internal forces tend to change over time, not surprisingly, the third-order variant performs better. Nevertheless, due to the inherent over-parametrization, the third-order scheme tends to learn excessive rotation information to represent simple acceleration motions, thus incurring inferior performance on the Dynamic Multipart Dataset which does not have self-propelled objects.\\n\\nOverall, it is indeed interesting yet non-trivial to learn much higher-order relationships and we leave it for future exploration.\\n\\nIn the revised paper, we have clarified the descriptions in lines 235-254 of Section 3.2, and added the new first-/third- order ablations in Table 8 of Appendix A.8. 
\\n\\n**Table:** *Quantitative results of ablation studies about 3 orders of Taylor expansion on Dynamic Multipart dataset and Dynamic Object Dataset.*\\n| | **Dynamic Multipart Dataset** | | | | | | **Dynamic Object Dataset** | | | | | |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| $1^{st}$-order | 34.776 | 0.990 | 0.013 | 26.729 | 0.976 | 0.018 | 38.892 | **0.995** | **0.005** | 28.536 | **0.983** | 0.012 |\\n| $2^{nd}$-order | 34.807 | **0.991** | **0.011** | **30.721** | **0.986** | **0.012** | 38.788 | **0.995** | 0.006 | 28.758 | 0.982 | **0.011** |\\n| $3^{rd}$-order | **35.268** | **0.991** | 0.012 | 30.503 | 0.985 | 0.013 | **39.164** | **0.995** | **0.005** | **29.378** | **0.983** | **0.011** |\\n\\n**Q4: Finally, since I do not work on this topic, I am not sure how significant is my concern above and I am happy to change my recommendation as I await to read other reviewer\\u2019s comments and the author's response to my review.**\\n\\n**A4:** Thank you for your willingness to discuss. Your concerns are very insightful and have significantly helped us to improve the quality of our revised paper. To sum up, in the revised paper, we have clarified the motivations of choosing a second-order updating scheme and the scopes of our method. Most notably, we have further conducted experiments about the suggested incremental experiments, and the ablation of first-/third- order updating schemes.\"}", "{\"comment\": \"**Comment \\\\#4: Regarding your emphasis on the experimental comparisons in Figure 13, I have two additional suggestions. 
First, including visualizations of the boundary frames between interpolation and extrapolation could better illustrate the complexity of extrapolation and help readers understand the challenges in transitioning from known to unknown motion. Second, your emphasis on 4DGS highlights some unexpected results: the significant degradation in both the reconstructed static elements (e.g., the building and ground) and the dynamic components (e.g., the skateboarder and the skateboard) seems inconsistent with 4DGS\\u2019s typical performance. This raises questions about whether these results are influenced by experimental settings or implementation details, and further clarification on this point would be helpful.**\\n\\n**Response \\\\#4:** Thank you very much for the two detailed suggestions. \\n\\nAs to the first suggestion, sure, we will add additional visualizations of adjacent frames between interpolation and extrapolation in the next version. We agree that this will clearly highlight the difficulty of extrapolation. \\n\\nRegarding your second comment about the implementation details of 4DGS, we would make the following clarifications:\\n\\n- As shown in Table 2 of our paper, our implementation of 4DGS shows the best performance for novel-view synthesis on the Dynamic Multipart dataset and achieves comparable performances on other datasets as well. Therefore, we are sure there is no issue in our implementation. \\n- In particular, we follow the official configuration of the deformation HexPlane for the real-world dataset, _i.e._, with a resolution of $64\\\\times64\\\\times64\\\\times150$. Then, we train it for 60000 iterations, where the coarse iteration is set as 3000 and we keep densifying the Gaussians till iteration 15000 (this setting exactly follows the official setting). For reference, we train DefGS for 40000 iterations, where the coarse iteration is also 3000 and the densifying iteration is also 15000. 
All canonical Gaussians are initialized by SfM, which is consistent for all baselines. \\n\\nOriginally, we found the official settings for both DefGS and 4DGS fail to reconstruct the challenging skater scene, primarily due to the very violent dynamics and relatively small size of the skater. To tackle this issue, we turn to a gradual feeding strategy for all Gaussian-based models. To be specific, we feed $0\\\\sim0.1$ (virtual) seconds for the first 1000 iterations of dynamic training, and gradually add the succeeding timestamps into the training set, _i.e._, $0\\\\sim0.2$ for 1000 to 2000 iterations, $0\\\\sim0.3$ for 2000 to 3000 iterations, and keep doing so till all training samples are added into the training set. Only in this way can DefGS achieve successful reconstruction, while 4DGS still fails with or without this gradual feeding strategy, primarily because:\\n\\n- Since 4DGS uses the HexPlane representation as a deformation field, the grid features are only trained locally (when a Gaussian appears at the grid), making the training signals unstable at earlier iterations for large motions. This will introduce artifacts and influence the reconstruction quality, resulting in unexpected distortions, especially for the novel view interpolation and extrapolation (the visualization in Figure 13 is from a novel view). \\n- 4DGS itself struggles to learn relatively thin parts with large motions, which is also analyzed in the original paper: the performance of 4DGS falls dramatically on the Hellwarrior scene compared to other scenes, whereas its concurrent work DefGS achieves much higher performance on that scene. Therefore, the skater scene, which has thin parts, poses challenges to 4DGS.
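For concreteness, the gradual feeding strategy described above can be written as a simple mapping from training iteration to the fed time window (an illustrative sketch in Python, not the actual training code; the 0.1 interval and 1000-iteration step follow the description):

```python
# Sketch of the gradual feeding strategy: the training set starts with
# (virtual) timestamps in [0, 0.1] and grows by 0.1 every 1000 iterations
# of dynamic training, until the full [0, 1] interval is covered.
def max_train_time(iteration, step=0.1, every=1000):
    """Upper end of the time window fed at a given training iteration."""
    return min(step * (iteration // every + 1), 1.0)
```

Under this schedule, iterations 0-999 see $0\sim0.1$, iterations 1000-1999 see $0\sim0.2$, and so on.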
\\n\\nOverall, we appreciate the reviewer's two suggestions and our concrete implementation details show that the skater scene is truly hard for 4DGS, and actually for all other baselines as well.\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and address the concerns below.\\n\\n**Q1: My main concern about this work is the assumption made (L219) that \\\"there is no additional force involved after $t=0$.\\\" Although the author give a justification that \\\"a rolling ball suddenly exploding is not learnable,\\\" I am not sure if the scope of the research is sufficiently broad given this constraint.**\\n\\n**A1:** Thank you for pointing out the inaccurate descriptions about our assumption in the original paper. \\n\\nFor clarification, our scheme in Equation 4 follows a second-order relationship to update dynamics parameters for each rigid particle from time $0$ to $t$. It captures up to a constant acceleration from $0$ to $t$, meaning that forces are indeed allowed to generate accelerations. More explanations are in the following response A2.\\n\\nIn the revised paper, we have rephrased the descriptions in lines 235-254 of Section 3.2.\\n\\n**Q2: First, while some moveable objects cannot move of their own volition, many dynamical (interesting) objects do have the ability to move on their own (e.g. humans, vehicles, animals, etc). By assuming no additional forces after $t=0$, the formulation assumes the presence of no dynamical objects, which conflicts with some of the qualitative results (whale, skater and van). Are we simply modeling these objects in a time window where no force is applied? It would be great if the authors can clarify on how the assumption impacts the modeling of self-propelled objects.**\\n\\n**A2:** We appreciate this thought-provoking comment. Theoretically, our updating scheme in Equation 4 can be naturally extended to higher orders or reduced to lower orders with regard to future time $t$. 
Intuitively, a higher order relationship from time $0$ to $t$ is expected to capture extremely complex dynamics such as self-propelled objects. \\n\\nIn the paper, one reason for choosing the second-order scheme to update dynamics parameters is that: In many applications such as robot manipulation, the need for future prediction typically involves a relatively short interval, *i.e.*, $|t-0|$ is rather small, *e.g.*, in milliseconds. In this case, a second-order relationship is usually sufficient to achieve decent approximations. \\n\\nIn addition, as suggested by the reviewer, a simple sliding window based approach can be applied to continuously and incrementally predict future frames given the newest visual observations from sensors, such that the dynamics of self-propelled objects can be well-captured.\\n\\nTo validate this, we conduct experiments for incremental learning on three self-propelled objects from the Dynamic Object Dataset. To be specific, we first feed time $t=0\\\\sim 0.15$ to train the network, and evaluate novel view interpolation on $t=0\\\\sim 0.15$, future frame extrapolation on $t=0.15\\\\sim 0.30$. Next, we include $t=0.15\\\\sim 0.30$ to train, and evaluate novel view interpolation on $t=0\\\\sim 0.30$, future frame extrapolation on $t=0.30\\\\sim 0.45$. We keep adding a time interval of 0.15 till we train from $t=0\\\\sim 0.75$, and extrapolate from $t=0.75\\\\sim 0.9$. \\n\\nThe following Table (Table 5 in revised paper) shows quantitative results. It can be seen that DefGS suffers from overfitting the previous timestamps and its interpolation performance decreases, while our model can stably adapt to new observations and achieve excellent past and future frame predictions. This means that even though the internal forces are changing for self-propelled objects, our model can easily adapt to new observations. \\n\\nMore explanations are in the following response A3. 
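For concreteness, the sliding-window schedule used in the incremental experiments above can be sketched as follows (an illustrative sketch, not the actual evaluation code; the 0.15 window step and five rounds follow the description):

```python
# Sketch of the sliding-window incremental protocol: each round extends the
# training window by 0.15 of (virtual) time; interpolation is evaluated on
# the training window and extrapolation on the following 0.15 interval.
def incremental_rounds(step=0.15, n_rounds=5):
    rounds = []
    for i in range(1, n_rounds + 1):
        train_end = round(i * step, 2)
        rounds.append({
            "train": (0.0, train_end),
            "interp": (0.0, train_end),
            "extrap": (train_end, round(train_end + step, 2)),
        })
    return rounds
```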
Details of the new incremental experiments are in Appendix A.5 in the revised paper.\\n\\n**Table:** *Quantitative results (PSNR) of incremental learning.*\\n| Interpolation | $0.15\\\\rightarrow0.30$ | $0.30\\\\rightarrow0.45$ | $0.45\\\\rightarrow0.60$ | $0.60\\\\rightarrow0.75$ | $0.75\\\\rightarrow0.90$ | Average |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| DefGS | 39.386 | 38.745 | 35.818 | 34.531 | 27.904 | 35.277 |\\n| **GVFi (Ours)** | 40.032 | 40.706 | 41.013 | 40.466 | 39.971 | 40.438 |\\n\\n| Extrapolation | $0.15\\\\rightarrow0.30$ | $0.30\\\\rightarrow0.45$ | $0.45\\\\rightarrow0.60$ | $0.60\\\\rightarrow0.75$ | $0.75\\\\rightarrow0.90$ | Average |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| DefGS | 23.438 | 21.360 | 19.989 | 19.670 | 17.629 | 20.417 |\\n| **GVFi (Ours)** | 29.958 | 32.260 | 31.384 | 29.527 | 28.958 | 30.417 |\"}", "{\"title\": \"Summary of Updates\", \"comment\": [\"We would like to express our gratitude to all reviewers for your valuable comments, and we have made significant improvements to our paper. Below is a consolidated summary of the changes made.\", \"**Evaluation on newly captured real-world scenes:** We capture 4 real-world dynamic scenes with 20 GoPro cameras and evaluate our method and baselines on them for future frame extrapolation. The leading performance on these new challenging real-world scenes consolidate the superiority of our method in physics learning.\", \"**Quantitative evaluation of object segmentation:** We follow NVFi to evaluate the rendered 2D object segmentation masks on Dynamic Indoor Scene dataset, where our method produces almost perfect results and significantly outperforms all baselines.\", \"**Evaluation with incremental learning:** We conduct incremental learning by taking gradually increasing observation frames for training. 
Our method can stably adapt to new observations and make reasonable predictions within different observation time windows, demonstrating strong potential for applications such as robot planning.\", \"**Additional baselines \\\\& adaptation with baselines:** We add two baseline methods, 4DGS and E-D3DGS, both of which, unlike ours, fail to extrapolate future frames. However, a combination of our translation rotation dynamics system with 4DGS shows strong capability in extrapolation, demonstrating the flexibility of our proposed physics learning module.\", \"**Experiments for long-term extrapolation:** We experiment with longer extrapolation with our model, demonstrating physically meaningful results on all datasets for extremely long-term extrapolation.\", \"**Additional ablation studies:** We complete ablation studies on $\\\\delta t$ on all datasets. Besides, we ablate learning different orders of velocity representations, where higher-order representations benefit complex motions with evolving accelerations as expected.\", \"**Experiments with different ways of interpolation:** We experiment with different ways of interpolation within our framework. Obtaining similar results with an alternative interpolation scheme consolidates the accuracy of the physical motion representations learned by our method.\", \"Overall, our additional experiments clearly demonstrate the superiority of our method over all baselines on five datasets. We believe that our rebuttal materials adequately address all your concerns. We will release all our code, datasets, and trained models to the community.\"]}
It captures up to a constant acceleration from $0$ to $t$, meaning that forces are allowed to generate accelerations and transfer energy.\\n\\nTheoretically, our updating scheme in Equation 4 can be easily extended to higher orders to capture extremely complex dynamics such as self-propelled objects (Refer to first-/third- order ablations in below response A7). \\n\\nIn addition, as suggested by the reviewer **j1SG**, a simple sliding window based approach can also be applied to continuously and incrementally predict future frames given the newest visual observations from sensors, such that the complex dynamics can be well-captured.\\n\\nTo validate this, we conduct experiments for incremental learning on three self-propelled objects from the Dynamic Object Dataset. To be specific, we first feed time $t=0\\\\sim 0.15$ to train the network, and evaluate novel view interpolation on $t=0\\\\sim 0.15$, future frame extrapolation on $t=0.15\\\\sim 0.30$. Next, we include $t=0.15\\\\sim 0.30$ to train, and evaluate novel view interpolation on $t=0\\\\sim 0.30$, future frame extrapolation on $t=0.30\\\\sim 0.45$. We keep adding a time interval of 0.15 till we train from $t=0\\\\sim 0.75$, and extrapolate from $t=0.75\\\\sim 0.9$.\\n\\nThe following Table (Table 5 in revised paper) shows quantitative results. It can be seen that DefGS suffers from overfitting the previous timestamps and its interpolation performance decreases, while our model can stably adapt to new observations and achieve excellent past and future frame predictions. This means that even though the internal forces are changing for self-propelled objects, our model can easily adapt to new observations. 
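As a minimal sketch of the second-order (constant-acceleration) update discussed in A1 above (our illustration, not the paper's exact Equation 4; `translate` and its per-coordinate tuples are our own simplification):

```python
# Second-order update for a rigid particle's translation:
# p(t) = p0 + v0 * t + 0.5 * a * t^2 (constant acceleration a).
# Dropping the acceleration term gives a first-order scheme; adding a
# constant-jerk term would give a third-order one.
def translate(p0, v0, a, t):
    return tuple(p + v * t + 0.5 * acc * t * t
                 for p, v, acc in zip(p0, v0, a))
```

For example, a particle with unit velocity along x and constant deceleration of 2 along z reaches (1, 0, -1) after one unit of time.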
\\n\\nIn the revised paper, we have rephrased the descriptions about Equation 4 in lines 235-254 of Section 3.2.\\n\\n**Table:** *Quantitative results (PSNR) of incremental learning.*\\n| Interpolation | $0.15\\\\rightarrow0.30$ | $0.30\\\\rightarrow0.45$ | $0.45\\\\rightarrow0.60$ | $0.60\\\\rightarrow0.75$ | $0.75\\\\rightarrow0.90$ | Average |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| DefGS | 39.386 | 38.745 | 35.818 | 34.531 | 27.904 | 35.277 |\\n| **GVFi (Ours)** | 40.032 | 40.706 | 41.013 | 40.466 | 39.971 | 40.438 |\\n\\n| Extrapolation | $0.15\\\\rightarrow0.30$ | $0.30\\\\rightarrow0.45$ | $0.45\\\\rightarrow0.60$ | $0.60\\\\rightarrow0.75$ | $0.75\\\\rightarrow0.90$ | Average |\\n|---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| DefGS | 23.438 | 21.360 | 19.989 | 19.670 | 17.629 | 20.417 |\\n| **GVFi (Ours)** | 29.958 | 32.260 | 31.384 | 29.527 | 28.958 | 30.417 |\\n\\n**Q2: Assuming I am interpreting the paper correctly, experiments seem to be only over short (\\\\~1 second) time horizons, which don't seem like they would challenge this assumption.**\\n\\n**A2:** For clarification, the total time interval $0\\\\sim 1$ is normalized (virtual) time for easy processing. In practice, it can surely be more than 1 second. \\n\\nAs also requested by the reviewer **3rJ7**, we further conduct experiments for much longer extrapolation. Particularly, in our main experiments, training period lasts from $t=0\\\\sim 0.75$ and extrapolation period lasts from $t=0.75\\\\sim 1.0$. Here we show the results till $t=1.5$, which is already twice the training period. \\n\\nAs shown in Figure 21 of Appendix A.15 in the revised paper, we provide qualitative results of longer extrapolation from the total four datasets. Note that, we are unable to provide quantitative results due to the lack of ground truth images. 
We can see that our method can still obtain physically meaningful future frame predictions of particularly high quality.\\n\\n**Q3: Presentation quality is extremely poor. Core concept is quite simple, but it's heavily obfuscated for no apparent reason. It could be explained in 1 paragraph.**\\n\\n**A3:** In the revised paper, in Sections 3.2\\\\&3.3, we have condensed the core techniques of the proposed translation rotation dynamics system. The revised version is now more concise.\"}", "{\"summary\": \"Paper proposes a method \\\"GVFi\\\" that tackles the problem of estimating dynamic 3D scenes.\\n\\nBroadly speaking, GVFi\\n - Uses an off the shelf method (3DGS) to compute gaussian splats in a canonical frame\\n - Uses an off the shelf method (\\\"Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction\\\" Yang et al., CVPR 2024) to estimate a deformation field over position, rotation, and scale of each gaussian as a function of time\\n - Uses these as inputs to then estimate the 3D gaussian's motion \\n\\nImportantly, these gaussians are parameterized as rotation around a moving rotation centerpoint, and this centerpoint's motion is described entirely by an initial position, velocity, and acceleration estimate.
These estimates are then optimized against the flow field as noisy ground truth and training observation reconstruction losses.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Method at its core is quite simple (this is a good thing)\", \"Learning a second order taylor series expansion of the full trajectory\", \"The quantitative results seem good, even if only minor improvements in a number of cases\"], \"weaknesses\": [\"Second order taylor series expansion seems quite limiting for arbitrary motion, or motion over non-trivial time horizons\", \"Assuming I am interpreting the paper correctly, experiments seem to be only over short (~1 second) time horizons, which don't seem like they would challenge this assumption\", \"Presentation quality is *extremely* poor\", \"Core concept is quite simple, but it's heavily obfuscated for no apparent reason. It could be explained in 1 paragraph.\", \"Core concepts seem poorly motivated; physics priors are common, but why only a second order expansion? Is this really a reasonable assumption in practice? There needs to be more motivation to this choice and more careful analysis of its limitations\", \"Figure 1 and 2 are almost the same thing but not very informative. A better figure would be demonstrating the taylor series expansion of a single gaussian's trajectory\", \"The math in section 3 does not feel like it was put there to be informative, but instead to intimidate the reader; after climbing through the notation its basically just saying to compose offsets together to estimate motion. If the authors feel this notational exercise is needed (don't think it is), it should go in the appendix and the main paper should have far more explanatory figures.\", \"Ablations do not seem to address the core contribution, which is the assumption of the second order expansion --- what if you only do a first order expansion? Can you attempt to extend this to third order? 
They briefly mention replacing it with an MLP, but minimal details are provided.\", \"I'm of the opinion that the paper has a neat idea but its presentation needs to be dramatically overhauled --- its assumptions need to be clearly stated and examined as reasonable or not, and it needs to have experiments where the method is pushed. Looking at the qualitative results, these datasets are very simple partwise rigid motion and the taylor series expansion is a nice trick to force smooth non-shattering motion, but it comes at the cost of generality --- nowhere does this seem to be addressed, considering the sometimes marginal performance improvements over far more flexible prior methods.\"], \"nit\": \"\\\"Cononical\\\" -> Canonical misspelling is rampant\", \"questions\": [\"How long are each of the datasets scenes? Are they really long enough to meaningfully challenge the assumption of second order expansion?\", \"The NVIDIA Dynamic Scene Dataset (Yoon 2020) contains many dynamic scenes in the 2020 paper, but this paper claims \\\"it consists of two real-world dynamic 3D scenes\\\", what are those scenes?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Comment \\\\#1: Limitations of the Core Assumptions -**\\n\\n**This work relies on a strong assumption that the motion occurs without external forces and that the objects in motion are rigid. While this assumption has led to favorable performance in the reported experiments, particularly in the extrapolation mode, I believe that the performance advantage is largely due to the characteristics of the chosen dataset rather than the generalizability of the method.**\\n\\n**Nonetheless, the actual results reveal significant limitations that question the validity of this assumption. 
For instance, in Figure 13 of the revised paper, while the overall predictions for the skateboarder\\u2019s body are relatively accurate, closer examination reveals that the skateboarder\\u2019s hand undergoes noticeable deformations during motion, which clearly violates the rigidity assumption. This results in poor predictions for finer details and highlights the method\\u2019s inability to handle non-rigid components effectively in real-world scenarios. Furthermore, in the extrapolation mode, the skateboard itself is not reconstructed, which is a critical failure given its integral role in the motion context. These results cast doubt on the method\\u2019s robustness and its applicability beyond datasets that closely conform to the rigid-body assumption.**\\n\\n**Response \\\\#1:** We would bring attention to the key facts that:\\n- First, we **never** assume motion without external forces, but with constant (or constantly changing in third order type) forces. Such examples include falling balls in gravity and all self-propelled objects in the datasets.\\n- Second, we **never** assume the object's motion is rigid. An object comprises numerous independent particles. A single particle's motion is rigid, but the resulting compounded object motion can be extremely complex. In the datasets, our method can exactly model many self-propelled deformable objects. \\n\\nAs to Figure 13, it should be noticed that all baselines achieve much worse results than ours. Such a scenarios is extremely challenging due to many potential factors such as limited training views and rapidly changing appearances, instead of being deformable. For example, 4DGS, the powerful baseline recommended by you, totally fails to reconstruct the skater, but our method still achieves reasonable future extrapolation. \\n\\nWe respect the reviewer's strong desire to see a groundbreaking solution in this field of study. 
However, a single failure case should not be a reason to deny the value of a method which shows the best results. \\n\\n---\\n\\n**Comment \\\\#2: Applicability of the Method and Alternative Approaches -**\\n \\n**I remain skeptical about the practicality of extrapolation based on such a strong assumption. In scenarios where this assumption holds true, simpler reconstruction-based editing approaches may achieve similar or even better outcomes. For example, after reconstruction, it is relatively straightforward to calculate motion properties such as velocity and momentum, which can then estimate the approximate future positions of objects.**\\n\\n**From the visual results provided by the authors (again, referring to Figure 13), the overall visual quality of the extrapolated objects is very poor, even when compared to what could potentially be achieved using the simpler editing method mentioned above. This raises serious doubts about the significance and practical utility of the extrapolation mode, and I strongly question whether it is meaningful in its current form.**\\n\\n**Response \\\\#2:** Regarding the mentioned reconstruction-based editing approach, it is virtually impossible in practice.\\n\\n- First, it always requires human knowledge in the loop, because you need to manually segment the interested parts and then apply your personally estimated motion onto them. The underlying physics is never learned, but from user's experience. \\n- Second, the mentioned approach can only work for linear motions without any rotations. If we need to estimate rotation information from the calculated velocity, we have to decompose the velocities into object groups and regress the rotation information, which requires accurate motion segmentation. However, as demonstrated by Table 3 and Figures $17\\\\sim 20$, all existing baselines fail to accurately segment objects, let alone making accurate dynamics estimation. 
\\n\\nThe reviewer constantly criticizes a single failure case, while ignoring the fact that all existing baselines are much worse than ours on that case. \\n\\nAgain, we respect the reviewer's opinion, but also question whether such a judgment without considering the current development of the field is valid and professional.\"}", "{\"comment\": \"Thank the authors for their valuable feedback and for addressing my questions. While my concerns have been resolved, I share the generalization and novelty concerns raised by reviewers 3rJ7, i7XQ, and j1SG. I will maintain my score as a borderline accept.\"}", "{\"comment\": \"**Q7: Questions 4) The authors claim that their framework is a general approach for modeling motion physics in complex dynamic 3D scenes. However, the datasets used, with only 60 frames in total, limit the complexity and extent of motion. Could the authors validate this claim by testing on more challenging synthetic and real-world datasets, such as the ParticleNeRF and PanopticSports datasets, to provide a more comprehensive evaluation of the framework\\u2019s effectiveness on complex scenes?**\\n\\n**A7:** For clarification, the total 60 frames are sampled frames, and the total time interval $0\\\\sim 1$ is also normalized (virtual) time for easy processing with limited computation resources. In principle, our method does not have specific requirements on the actual frame rate. \\n\\nThank you for suggesting ParticleNeRF and PanopticSports datasets. After a close investigation of the two datasets, we found that ParticleNeRF involves springs or cloth and PanopticSports involves random human interactions. These dynamics are rather chaotic and beyond the scope of this paper given the limited rebuttal time. Nevertheless, we agree that it is interesting and we leave it for future exploration. \\n\\nInstead, we collect a new challenging real-world dataset by 20 GoPro cameras, named **GoPro Dataset**. Our dataset captures 4 dynamic scenes. 
For each dynamic scene, we select 89 frames from each view, and resize images to be a resolution of $960\\\\times540$. We reserve the first 67 frames at 17 picked viewing angles as the training split, _i.e._, 1139 frames, while leaving the 67 frames at the remaining 3 viewing angles for evaluating _novel view interpolation_ within the training time period, _i.e._, 201 frames. We keep the last 22 frames at all 20 viewing angles for evaluating _future frame extrapolation_, _i.e._, 440 frames in total. More details are in Appendix A.6.\\n\\nThe following Table 6 (Table 3 in revised paper) shows the results of our method and baselines. We can see that our method is significantly better than DefGS/NVFi/TiNeuVox and also surpasses the strongest baseline built by us, demonstrating the effectiveness of our method on challenging real-world 3D scenes. \\n\\nIn the revised paper, we have added our newly collected dataset and the new results in Section 4, significantly improving the quality of our paper.\\n\\n**Table 6:** _Quantitative results for both novel view interpolation and future frame extrapolation on GoPro Dataset._\\n| | | | GoPro | Dataset | | |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| TiNeuVox | 15.306 | 0.588 | 0.516 | 20.323 | 0.738 | 0.318 |\\n| NVFi | 14.229 | 0.568 | 0.569 | 19.879 | 0.736 | 0.415 |\\n| DefGS | 20.018 | **0.838** | **0.167** | 21.193 | 0.842 | 0.185 |\\n| DefGS$_{nvfi}$ | **20.254** | **0.838** | **0.167** | _25.469_ | _0.882_ | _0.141_ |\\n| **GVFi** (Ours) | _20.124_ | _0.834_ | _0.168_ | **26.276** | **0.890** | **0.131** |\"}", "{\"comment\": \"**Q8: I'm of the opinion that the paper has a neat idea but its presentation needs to be dramatically overhauled --- its assumptions need to be clearly stated and examined as reasonable or not, and it needs to have 
experiments where the method is pushed. Looking at the qualitative results, these datasets are very simple partwise rigid motion and the taylor series expansion is a nice trick to force smooth non-shattering motion, but it comes at the cost of generality --- nowhere does this seem to be addressed, considering the sometimes marginal performance improvements over far more flexible prior methods.**\\n\\n**A8:** As requested, in the revised paper, we have: 1) rephrased the assumption, motivation and scope of our method; 2) included more ablation results on first-/ third- order relationships; 3) collected and evaluated on a new challenging real-world dataset; 4) conducted incremental learning for complex dynamic 3D scenes; 5) conducted extremely long extrapolation on four datasets; 6) added two new recent baselines 4DGS and E-3DGS, and more. \\n\\nMost notably, compared with all existing NeRF-based and 3DGS-based methods including T-NeRF/D-NeRF/TiNeuVox/DefGS/4DGS/E-D3DGS, our method surpasses them by at least 5-10 points on PSNR for the core problem of future frame extrapolation on five datasets. The state-of-the-art method NVFi and another baseline built by us are also clearly inferior to our method for future frame extrapolation. \\n\\nTo the best of our knowledge, there are no ``far more flexible prior methods\\\" for future frame extrapolation. We are happy to compare if the reviewer suggests such new methods. Overall, we believe the reviewer's core concerns have been clearly addressed in our revised paper. \\n\\n**Q9: Nit: \\\"Cononical\\\" $\\\\rightarrow$ Canonical misspelling is rampant.**\\n\\n**A9:** Typos fixed. \\n\\n**Q10: How long are each of the datasets scenes? 
Are they really long enough to meaningfully challenge the assumption of second order expansion?**\\n\\n**A10:** Refer to A2.\\n\\n**Q11: The NVIDIA Dynamic Scene Dataset (Yoon 2020) contains many dynamic scenes in the 2020 paper, but this paper claims \\\"it consists of two real-world dynamic 3D scenes\\\", what are those scenes?**\\n\\n**A11:** For clarification, our experiment setting on NVIDIA Dynamic Scene Dataset exactly follows prior work NVFi for a fair comparison. The two scenes (*skating man* and *moving truck*) are selected by NVFi, not us. \\n\\nTo further validate the effectiveness of our method on challenging real-world scenes, we collect a new challenging real-world dataset by 20 GoPro cameras, named **GoPro Dataset**. Our dataset captures 4 dynamic scenes. For each dynamic scene, we select 89 frames from each view, and resize images to be a resolution of $960\\\\times540$. We reserve the first 67 frames at 17 picked viewing angles as the training split, *i.e.*, 1139 frames, while leaving the 67 frames at the remaining 3 viewing angles for evaluating *novel view interpolation* within the training time period, *i.e.*, 201 frames. We keep the last 22 frames at all 20 viewing angles for evaluating *future frame extrapolation*, *i.e.*, 440 frames in total. More details are in Appendix A.6.\\n\\nThe following Table (Table 3 and Table 14 in revised paper) shows the results of our method and baselines. We can see that our method is significantly better than DefGS/NVFi/TiNeuVox and also surpasses the strongest baseline built by us, demonstrating the effectiveness of our method on challenging real-world 3D scenes. \\n\\nIn the revised paper, we have added our new real-world dataset and results in Section 4.1 and Appendix A.14 Table 14. 
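As a quick sanity check, the split sizes quoted above follow directly from the per-view frame counts (a trivial arithmetic sketch; the variable names are ours):

```python
# GoPro Dataset split arithmetic as described above: 89 frames per view,
# 20 views; the first 67 frames of 17 views train, the same frames of the
# remaining 3 views test interpolation, and the last 22 frames of all 20
# views test extrapolation.
views_total, views_train = 20, 17
frames_per_view, frames_train = 89, 67

train_frames = frames_train * views_train                       # 1139
interp_frames = frames_train * (views_total - views_train)      # 201
extrap_frames = (frames_per_view - frames_train) * views_total  # 440
```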
\\n\\n**Table:** *Quantitative results of all methods for both novel view interpolation and future frame extrapolation on GoPro data.*\\n| | | | GoPro | Dataset | | |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | | Interpolation | | | Extrapolation | |\\n| | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ | PSNR$\\\\uparrow$ | SSIM$\\\\uparrow$ | LPIPS$\\\\downarrow$ |\\n| TiNeuVox | 15.306 | 0.588 | 0.516 | 20.323 | 0.738 | 0.318 |\\n| NVFi | 14.229 | 0.568 | 0.569 | 19.879 | 0.736 | 0.415 |\\n| DefGS | 20.018 | **0.838** | **0.167** | 21.193 | 0.842 | 0.185 |\\n| DefGS$_{nvfi}$ | **20.254** | **0.838** | **0.167** | _25.469_ | _0.882_ | _0.141_ |\\n| **GVFi** (Ours) | _20.124_ | _0.834_ | _0.168_ | **26.276** | **0.890** | **0.131** |\"}" ] }
0ZcQhdyI3n
LSH Tells You What To Discard: An Adaptive Locality-Sensitive Strategy for KV Cache Compression
[ "Tahseen Rabbani", "Minghui Liu", "Tony O'Halloran", "Ananth Sankaralingam", "Mary-Anne Hartley", "Furong Huang" ]
Transformer-based large language models (LLMs) use the key-value (KV) cache to significantly accelerate inference by storing the key and value embeddings of past tokens. However, this cache consumes significant GPU memory. In this work, we introduce LSH-E, an algorithm that uses locality-sensitive hashing (LSH) to compress the KV cache. LSH-E quickly locates tokens in the cache that are cosine dissimilar to the current query token. This is achieved by computing the Hamming distance between binarized Gaussian projections of the current token query and cached token keys, with a projection length much smaller than the embedding dimension. We maintain a lightweight binary structure in GPU memory to facilitate these calculations. Unlike existing compression strategies that compute attention to determine token retention, LSH-E makes these decisions pre-attention, thereby reducing computational costs. Additionally, LSH-E is dynamic -- at every decoding step, the key and value of the current token replace the embeddings of a token expected to produce the lowest attention score. We demonstrate that LSH-E can compress the KV cache by 30\%-70\% while maintaining high performance across reasoning, multiple-choice, long-context retrieval and summarization tasks.
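The eviction criterion sketched in this abstract can be illustrated in a few lines (our illustrative sketch, not the authors' implementation; the dimensions, seed, and single-candidate eviction loop are arbitrary choices):

```python
import random

# Sign-binarized Gaussian projections: cosine-similar vectors get similar
# bit signatures, so Hamming distance between signatures approximates
# cosine dissimilarity. The cached key whose signature is farthest from
# the current query's signature is the eviction candidate.
random.seed(0)
d, k = 64, 16  # embedding dimension and projection length, k << d
R = [[random.gauss(0.0, 1.0) for _ in range(k)] for _ in range(d)]

def lsh_bits(x):
    # One bit per random hyperplane: sign of the projection x @ R[:, j].
    return [sum(xi * row[j] for xi, row in zip(x, R)) > 0 for j in range(k)]

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def eviction_candidate(query, cached_keys):
    q = lsh_bits(query)
    dists = [hamming(q, lsh_bits(key)) for key in cached_keys]
    return dists.index(max(dists))  # most cosine-dissimilar cached key
```

A query and its negation produce fully flipped signatures (almost surely), so a negated key would always be selected for eviction over an identical one.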
[ "kv cache", "locality-sensitive hashing", "compression" ]
https://openreview.net/pdf?id=0ZcQhdyI3n
https://openreview.net/forum?id=0ZcQhdyI3n
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zKz3cSfoIJ", "yK5nx06uTD", "xSGn6ksfTo", "x8OPs6R67b", "r6rpq2p4k6", "pDyBW8LuV9", "oopY1odpRV", "ododg2Bt44", "oBud8EERZK", "lwLugTIqdr", "l1EXUfIRqN", "d0GXq7685S", "bctE84V3zU", "aEvcZQgdfy", "W9qVJJsXld", "W9U0bbn74Y", "W0YsTFKpjK", "Vhnxz0RY4K", "VBQWqo7xGE", "SYysewHFv0", "SM04FIcLFT", "Pih8oXOe5u", "PX4lCXE9hU", "NsJdlMqcyU", "MG6zvJRcdk", "MFbUxP5VFe", "La4guAZDyP", "KddJ9TnsEc", "JYx7Kv1FzR", "J4s1HJo49s", "IpZM7A3ZtF", "CWntvWn02U", "5TQXi9fvMn", "4x3iJJQZ3x", "2gMbJRVn7a", "1k9HPSop3X", "0jHhJaRhQb", "0ZEv00tiIX", "08djPh379u" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733103226815, 1732396452473, 1733163714313, 1732760771278, 1730701323200, 1732772345212, 1731441980121, 1732761255073, 1732769371944, 1733201520307, 1732762659844, 1733026918302, 1732770219695, 1733229231285, 1737506190634, 1732767099087, 1732761276822, 1733026900699, 1733212065682, 1732764986171, 1733156606085, 1733030133791, 1733200125637, 1732772248678, 1730716280569, 1732768761644, 1733200103257, 1732769105688, 1730696058369, 1732767047009, 1733026907616, 1733026386940, 1732760923101, 1733026924302, 1733205373749, 1732765825641, 1730699749147, 1732770642497, 1733026889966 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_HtGz" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_a2yh" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_NvGH" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_R9hV" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_a2yh" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_HtGz" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_a2yh" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_yqyi" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_a2yh" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_HtGz" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" 
], [ "ICLR.cc/2025/Conference/Submission6181/Reviewer_rWSu" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ], [ "ICLR.cc/2025/Conference/Submission6181/Authors" ] ], "structured_content_str": [ "{\"title\": \"Follow-up author response\", \"comment\": \"Thank you for your follow-up feedback. We address them below.\\n\\n>> However, the size of the KV cache scales quadratically with sequence length n and linearly with the number of attention layers and heads. (Line 38-39)\\n> This is not true\\u2014the size of the KV cache scales linearly with sequence length n.\\n\\nThis indeed was a typo which will be amended to reflect that the KV cache scales linearly with sequence length. This was likely spliced from text which indicated that attention without caching scales quadratically. It will be amended.\\n\\n>> For example, maintaining the KV cache for a sequence of 4K tokens in half-precision (FP16) can require approximately \\u223c16GB of memory for most models within the Llama 3 family (Dubey et al., 2024). (Line 42-43)\\n> This is also not true. 4K context length only occupies 500MB for Llama3-8B or 1.25GB for Llama3-70B or 2GB for Llama3-405B.\\n\\nThis was a typo as well. It should have followed this formula (accounting for $K$ and $V$ matrices in the cache):\\n\\n**Total size of KV cache (bytes) = (batch size) * (sequence length) * 2 * (num layers) * (hidden size) * sizeof(FP16)**\", \"some_example_calculations\": \"for a sequence length of 4000 this is approximately ~2 GB for Llama2-7B. For Llama3-8B, since it typically uses 8 attention heads for keys and values (via grouped-query attention) this results in the 500MB calculation that you mentioned. 
This formula is commonly used for rough estimation of memory complexity, for example, [on this NVIDIA guide.](https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/) This statement will be amended along with inclusion of this formula.\\n\\n\\n\\n> Usually, when the context size is not very large, the majority of time is spent on MLP instead of attention. Typically, the boundary lies between 16 K and 32 K, depending on the model arch and GPUs.\\n\\n\\n> To make sure readers well understand the technique presented in the paper, I will ask for\\n\\n> The average context length of the benchmark tested, especially the benchmark with the longest average context lengths.\\n\\nThanks for the clarifying question. Please see the table below for the statistics on the long context-retrieval datasets used in this work. The tokenizer used is from the Llama3-8B-Instruct model. \\n\\n|Task|Number of Samples| Avg Prompt Tokens | Max Prompt Tokens | Min Prompt Tokens | Std Prompt Tokens |\\n|-----------|------|----------|-------|------|---------|\\n| Ruler Common Words | 500 | 3791.21 | 3980 | 3613 | 68.29 |\\n| Ruler Needle-In-A-Haystack | 500 | 3819.52 | 3831 | 3811 | 3.49 |\\n| LongBench MultiNews | 200 | 2650.11 | 13977 | 172 | 2133.29 |\\n| LongBench GovReport | 200 | 10286.41 | 51438 | 2065 | 6687.87 |\\n\\n> The GPU used in the experiments (and the framework, e.g., TensorRT-LLM, vLLM, SGLang, MLC-LLM, or native pytorch/Jax).\\n\\nWe used an Nvidia H100 80GB for the two LongBench summarization tasks (GovReport and MultiNews) and an Nvidia L4 for all other tasks. We used cold-compress as our testing framework, which is implemented in PyTorch. These benchmark statistics will be added to our next revision within the Appendix and referenced within our \\\"Experiments\\\" section.\"}", "{\"summary\": \"LLMs utilize KV cache to accelerate inference but take up significant GPU memory. 
LSH-E is an algorithm that uses LSH to compress the KV cache by evicting tokens that are cosine dissimilar. The token eviction happens pre-attention, thus making this method computationally affordable.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The small size of the KV cache allows it to be stored in GPU memory, eliminating latency from moving data between CPU and GPU.\\n2. KV cache eviction happens before attention computation, cutting down on unnecessary and expensive attention computations.\\n3. The greedy eviction approach makes it computationally very affordable.\", \"weaknesses\": \"1. It would be helpful to have an ablation study of LSH-E's performance with different numbers of first and recent tokens cached.\\n2. The benchmarks seem limited; there are only two datasets per task and the improvement over the baseline is not very significant in Needle-in-a-Haystack, Common Words, and MedQA Multiple Choice.\\n3. Evaluation does not include end-to-end speedup numbers, making it more difficult to see the ultimate impact of the contribution.\\n4. The greedy eviction algorithm assumes that the attention score between a particular key vector and the current query vector is representative of the attention score with subsequent query vectors. While there is ample empirical exploration on the correlation between attention and inverted LSH hamming distance, I could not find provable theoretical guarantees about the quality of the KV cache under this greedy eviction strategy or empirical observations about the consistency of attention scores across query vectors that suggest the soundness of this assumption. This is in contrast to other greedy approaches such as H2O that uses *accumulated* attention to be more robust to variations between individual query tokens.\", \"questions\": \"1. 
Under \\\"Configuration and Setup\\\", it is mentioned that you \\\"keep the most recent 10 tokens and the first 4 tokens of the prompt always in the KV cache.\\\" Is the L2 eviction baseline also configured this way?\\n2. How well does LSH-E perform without keeping the most recent 10 tokens and the first 4 tokens?\\n3. Is it possible to perform more evaluations on LongBench tasks?\\n4. Do you have empirical results that show that the attention score for the current token is a reasonable proxy for attention scores for subsequent token, or that a low attention score for a current query token implies that the key token will not be critical to subsequent query tokens?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. I wonder why H2O suffers from a decrease in throughput compared to full attention.\"}", "{\"title\": \"Summary of Rebuttal Pt 1\", \"comment\": \"# General Comments\\n\\nThank you to all reviewers for your thoughtful feedback and constructive suggestions. Your comments have been invaluable in helping us refine and strengthen our work. We are encouraged by the recognition of the computational efficiency and simplicity of our proposed LSH-E method, as well as its potential as a practical strategy for KV cache compression in resource-constrained scenarios. \\n\\n# Reviewer Highlights\", \"we_would_like_to_summarize_the_highlights_that_reviwers_appreciated_in_our_paper\": \"* Reviewer HtGz believes that our attention-free approach is \\\"**computationally** very **affordable**\\\" and \\\"cuts down on unnecessary and expensive attention computations\\\".\\n* Reviewer NvGH, R9hV and a2yh commend our novel use of \\\"LSH to approximate attention computation\\\". NvGH comments that it \\\"contributes to both the **effectiveness** and **scalability** of the proposed method\\\". 
a2yh comments that \"the motivations and reasons why LSH can produce a good performance are **well discussed**.\"\n* Reviewer yqyi remarks that our approach is \"**simple** yet **elegant**\" and that we did \"**good evaluations** on a range of use-cases\".\n* Reviewer rWSu notes that our approach is \"**simple** and **clear** with **illustrative examples**\". \n\n# Summary of Changes\n\nTo address the feedback from the reviewers, we made the following improvements:\n\n## 1. Additional Benchmarks and Baselines\nReviewers suggested that we should add tasks with even longer context length. In response, we expanded the experiments to include two new tasks from the LongBench benchmarks: MultiNews and GovReport. Both are long-context summarization tasks. \n\nAdditionally, comparisons to well-cited KV cache compression strategies, such as H2O, Scissorhands, and FastGen, were added to contextualize LSH-E's performance against state-of-the-art baselines. We have updated existing experiments in the paper to include these new baselines. We also provide results of the two summarization tasks in Table 1 below. \n\nIn these new experiments, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves higher Rouge L scores at most cache budgets, outperforming all baselines, demonstrating LSH-E\u2019s robustness and effectiveness in handling very large context lengths.\n\n## 2. Throughput Analysis\nAnother addition to this rebuttal is the inclusion of throughput metrics. We provide decoding and prefill tokens per second results on the LongBench MultiNews task. LSH-E is 1.5-2x faster than H2O and Scissorhands, and 17x faster than FastGen at the prefill stage. Even without low-level optimizations (e.g., expressing hash tables in binary bits), LSH-E proved to be as fast as the L2 strategy in decoding and significantly faster than attention-based baselines. 
\\n\\nThis speedup was achieved while maintaining competitive quality metrics, demonstrating the computational efficiency of LSH-E. The throughput results address reviewer concerns about runtime metrics and substantiate the claimed computational benefits.\\n\\n### Table 1: Results of LongBench MultiNews Summarization with Throughput Metrics\\n| | | GovReport | MultiNews | | |\\n|---|---|---|---|---|---|\\n| Strategy | Cache Budget | Rouge L | Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |\\n| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |\\n| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |\\n| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |\\n| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |\\n| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |\\n| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |\\n| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |\\n| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |\\n| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |\\n| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |\\n| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |\\n| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |\\n| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |\\n| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |\\n| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |\\n| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |\\n| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |\\n| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |\\n| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |\\n| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |\\n| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |\"}", "{\"summary\": \"This paper introduces LSH-E, an algorithm for compressing the key-value (KV) cache in large language models (LLMs) using locality-sensitive hashing (LSH). 
Despite the availability of prior work\\u2014including KDEformer, Hyperattention, SubGen, and QJL\\u2014that similarly utilizes LSH for efficient attention and memory management, these related efforts are not cited here. LSH-E leverages Hamming distance calculations in a binary space following a Quantized Johnson-Lindenstrauss (JL) transform (SimHash) to identify and evict tokens with low relevance to the current query, resulting in memory savings. This pre-attention approach provides a lightweight, GPU-efficient solution for long-context tasks, although its effectiveness ultimately depends on the algorithm\\u2019s CUDA implementation efficiency.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The use of theoretical approaches such as SimHash, a highly efficient hashing method, is a valuable aspect of this work, contributing to both the effectiveness and scalability of the proposed method.\", \"weaknesses\": [\"The term \\\"novel\\\" should not be used for LSH in this context, as it is not a new approach and has appeared in prior work. Specifically, the methods used in KDEformer, Hyperattention, QJL, and SubGen demonstrate significant overlap, yet these works are not cited here, despite their relevance.\", \"The experimental setup lacks comprehensiveness; comparisons with alternative methods like H2O, SubGen, and other established baselines should be included to provide a more robust evaluation.\", \"The datasets used in the experiments are not sufficiently large for evaluating performance in long-context scenarios. Given that these methods target long-sequence processing, experiments should ideally use token sizes over 50,000. 
LongBench or other large-scale datasets would be more appropriate for a thorough evaluation.\", \"Additionally, runtime metrics should be reported to assess the efficiency of token generation and to substantiate the computational benefits claimed in the paper.\"], \"kdeformer\": \"https://proceedings.mlr.press/v202/zandieh23a.html\", \"hyperattention\": \"https://arxiv.org/abs/2310.05869\", \"subgen\": \"https://arxiv.org/abs/2402.06082\", \"qjl\": \"https://arxiv.org/abs/2406.03482\", \"questions\": [\"Could you provide a plot showing the distortion error introduced by LSH compression across different levels of compression? Specifically, how does the approximation quality change as more tokens are evicted or as the quantization parameters are adjusted?\", \"Given that LSH-E\\u2019s efficiency largely depends on its CUDA implementation, can you elaborate on any specific optimizations made within the CUDA code?\", \"Could you clarify how LSH-E handles multi-head attention? Specifically, is each head processed separately with its own LSH compression, or is there a shared mechanism across heads?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors Pt 2\", \"comment\": \"### Weakness 5\\n> The discussion of the error introduced by the LSH is not included. I wonder what if we use cosine similarity to evict the cache instead of LSH, how will be the accuracy, latency, and memory usage?\\n\\nWe conducted attention loss analysis which approximates this error. Since our LSH projection is simply searching for large/small dot products, eviction via true cosine similarity would essentially be equivalent to conducting full attention with everything in the KV cache and removing the token with lowest attention score. It would be better to leverage a technique such as H$_2$O or ScissorHands which relies on accumulated attention in this scenario. 
In any case, it would result in $\\mathcal{O}(N^2)$ memory and $\\mathcal{O}(dN^2)$ computational complexity, where $d$ is the projection dimension, for a KV cache with $N$ tokens and worse latency due to the dot product calculation between floating-point vectors versus bit-wise comparison of Boolean hashes. Please let us know if this is not clear.\n\nBelow is an experiment measuring attention loss for LSH-E, L2 and Scissorhands, quantifying the discrepancy introduced by the eviction strategy compared to maintaining the full cache. We measured the attention loss of each attention head and report the average. Attention loss is defined as the sum of the attention probabilities for evicted tokens. Or equivalently, 1 - the sum of the attention probabilities for the tokens in the compressed cache.\n \nThe attention loss was measured at 50% cache budget using prompts from the GSM8K question answering dataset. As per Table 5, all three methods have low attention loss at 50% cache budget, and LSH-E has lower attention loss compared to L2 and Scissorhands, proving LSH-E's ability to keep high-attention tokens in the KV cache. By quantifying attention loss, we demonstrated that LSH-E introduces minimal deviation from full-cache attention. \n\n### Table: Attention Loss\n| Strategy| Attention Loss |\n|-----|------|\n| LSH-E | 0.03357896805 |\n| L2 | 0.03403072357 |\n| Scissorhands | 0.04483547211 |\n\n\n### Weakness 6\n> In the supplementary materials, we see more experiments with more baselines that are better than L2. I wonder the reason why the authors do not include them.\n\nWe calculated multiple metrics from the same family. For example, we calculated four different variations of Rouge: Rouge 1, 2, L and Lsum, and precision, recall and F1 of BertScore. We also used GPT4 as a judge on four different metrics: similarity to the ground truth, helpfulness, coherence and faithfulness. 
LSH-E outperforms the baselines on all these metrics in most of the experiments. But due to page limitations we chose to show only the metrics that are most relevant to each task / dataset in the paper.\n\n\n### Presentation 1\n> Line 180 \"heavy hitters' -> ``heavy hitters'' P2. The axis captions of the figures are too small to be seen.\n\nThank you for pointing out the typo. It has been fixed in the paper. We have also updated the figures to make the axis labels larger. \n\n---\n\nThank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know.\n\n\n## Reference\n\n[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference [...].\n\n[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence [...]\n\n[3] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.\n\n[4] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.\"}", "{\"summary\": \"The idea is to reduce KV Cache by evicting and permanently dropping tokens at each position in the query. The heuristic used is to evict the lowest attention scored keys ( which is essentially similar to H2O / Scissorhands which preserve top attention scored keys). 
The difference is to use LSH to do a approximate score ranking to avoid SoftMax for exact computation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Uses LSH to approximate attention computation for eviction (if you compare to h2o / scissorhands)\", \"weaknesses\": [\"Novelty: The novelty is limited.\", \"H2O / Scissorhands are known to not perform well on longbenchmark. Can we see some results on longbenchmark like passage retrieval datasets ?\", \"Missing baselines --only baseline used is L2 norm.\", \"Limited evaluation. can we get more results on longbenchmark at different budgets with standard baselines.\"], \"questions\": \"see questions above,\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Rebuttal Pt 3\", \"comment\": \"## 7. Updated Ablation Studies on Attention Sink Tokens and Recent Tokens\\n\\nWe performed additional ablations including using only sink and recent tokens as a strategy, and L2 and LSH-E with only sink tokens, and with only recent tokens. The LSH dimension was set to 16 bits. The number of sink tokens is 4 and the number of recent tokens is 10 except for the Sink & Recent strategy, which keeps (cache_size - 4) most recent tokens. From the results we found that sink tokens have a bigger impact on the performance of LSH while recent tokens impact L2 more. Please see updated results in Table 4 below. 
\\n\\n#### Table 4: Ablation of Attention Sink Tokens and Recent Tokens on GSM8K Free Response Question Answering\\n| Cache Budget | Strategy | Bert F1 | Rouge L | GPT Rouge | GPT Coherence | GPT Faithfulness | GPT Helpfulness |\\n|---|---|---|---|---|---|---|---|\\n| 10% | LSH-E | 0.831 | 0.157 | 1.018 | 1.387 | 1.147 | 1.083 |\\n| 10% | LSH-E no sink no recent | 0.708 | 0.025 | 1.000 | 1.000 | 1.000 | 1.000 |\\n| 10% | LSH-E no sink | 0.713 | 0.027 | 1.000 | 1.000 | 1.000 | 1.000 |\\n| 10% | LSH-E no recent | 0.847 | 0.189 | 1.100 | 2.002 | 1.348 | 1.326 |\\n| 10% | L2 | 0.826 | 0.151 | 1.005 | 1.293 | 1.098 | 1.033 |\\n| 10% | L2 no sink no recent | 0.804 | 0.130 | 1.000 | 1.088 | 1.030 | 1.016 |\\n| 10% | L2 no sink | 0.836 | 0.178 | 1.026 | 1.600 | 1.138 | 1.096 |\\n| 10% | L2 no recent | 0.829 | 0.171 | 1.014 | 1.394 | 1.098 | 1.032 |\\n| 10% | Sink & Recent | 0.843 | 0.176 | 1.040 | 1.882 | 1.298 | 1.248 |\\n| 30% | LSH-E | 0.873 | 0.341 | 2.520 | 3.767 | 3.216 | 3.190 |\\n| 30% | LSH-E no sink no recent | 0.744 | 0.068 | 1.004 | 1.024 | 1.018 | 1.006 |\\n| 30% | LSH-E no sink | 0.744 | 0.066 | 1.006 | 1.018 | 1.028 | 1.002 |\\n| 30% | LSH-E no recent | 0.873 | 0.342 | 2.546 | 3.956 | 3.340 | 3.472 |\\n| 30% | L2 | 0.865 | 0.288 | 1.356 | 2.428 | 1.895 | 1.841 |\\n| 30% | L2 no sink no recent | 0.844 | 0.228 | 1.040 | 1.478 | 1.292 | 1.268 |\\n| 30% | L2 no sink | 0.865 | 0.290 | 1.474 | 2.750 | 2.010 | 2.102 |\\n| 30% | L2 no recent | 0.846 | 0.238 | 1.032 | 1.478 | 1.320 | 1.272 |\\n| 30% | Sink & Recent | 0.868 | 0.310 | 1.910 | 3.432 | 2.616 | 2.682 |\\n| 50% | LSH-E | 0.880 | 0.393 | 3.457 | 4.530 | 4.212 | 4.241 |\\n| 50% | LSH-E no sink no recent | 0.803 | 0.178 | 1.322 | 1.570 | 1.696 | 1.424 |\\n| 50% | LSH-E no sink | 0.802 | 0.179 | 1.362 | 1.554 | 1.684 | 1.440 |\\n| 50% | LSH-E no recent | 0.880 | 0.399 | 3.624 | 4.638 | 4.338 | 4.446 |\\n| 50% | L2 | 0.875 | 0.355 | 2.190 | 3.494 | 3.035 | 3.027 |\\n| 50% | L2 no sink no recent | 0.866 | 0.318 
| 1.548 | 2.690 | 2.320 | 2.308 |\\n| 50% | L2 no sink | 0.876 | 0.359 | 2.492 | 3.710 | 3.170 | 3.276 |\\n| 50% | L2 no recent | 0.866 | 0.319 | 1.570 | 2.686 | 2.382 | 2.336 |\\n| 50% | Sink & Recent | 0.879 | 0.385 | 3.412 | 4.488 | 4.054 | 4.122 |\\n| 70% | LSH-E | 0.881 | 0.401 | 3.734 | 4.671 | 4.404 | 4.444 |\\n| 70% | LSH-E no sink no recent | 0.847 | 0.295 | 2.350 | 2.818 | 2.912 | 2.612 |\\n| 70% | LSH-E no sink | 0.847 | 0.295 | 2.332 | 2.794 | 2.888 | 2.600 |\\n| 70% | LSH-E no recent | 0.881 | 0.402 | 3.884 | 4.790 | 4.546 | 4.650 |\\n| 70% | L2 | 0.879 | 0.386 | 2.934 | 4.184 | 3.817 | 3.820 |\\n| 70% | L2 no sink no recent | 0.876 | 0.374 | 2.684 | 3.836 | 3.510 | 3.528 |\\n| 70% | L2 no sink | 0.879 | 0.390 | 3.266 | 4.370 | 4.018 | 4.104 |\\n| 70% | L2 no recent | 0.876 | 0.374 | 2.718 | 3.842 | 3.522 | 3.516 |\\n| 70% | Sink & Recent | 0.881 | 0.401 | 3.810 | 4.720 | 4.428 | 4.508 |\\n| 90% | LSH-E | 0.881 | 0.403 | 3.837 | 4.722 | 4.468 | 4.525 |\\n| 90% | LSH-E no sink no recent | 0.868 | 0.363 | 3.222 | 3.784 | 3.826 | 3.618 |\\n| 90% | LSH-E no sink | 0.869 | 0.363 | 3.248 | 3.822 | 3.854 | 3.628 |\\n| 90% | LSH-E no recent | 0.882 | 0.406 | 4.018 | 4.788 | 4.562 | 4.650 |\\n| 90% | L2 | 0.881 | 0.400 | 3.569 | 4.578 | 4.324 | 4.361 |\\n| 90% | L2 no sink no recent | 0.880 | 0.397 | 3.460 | 4.486 | 4.210 | 4.282 |\\n| 90% | L2 no sink | 0.881 | 0.402 | 3.752 | 4.658 | 4.388 | 4.470 |\\n| 90% | L2 no recent | 0.880 | 0.397 | 3.438 | 4.482 | 4.188 | 4.238 |\\n| 90% | Sink & Recent | 0.881 | 0.405 | 4.006 | 4.792 | 4.572 | 4.644 |\\n| 100% | Full | 0.882 | 0.403 | 3.845 | 4.716 | 4.499 | 4.545 |\"}", "{\"title\": \"Rebuttal by Authors Pt 3\", \"comment\": \"### Question 3\\n> Line 145: Formally for our setup, distd(x, y) cos \\u03b8x,y, here it is more a measure of cosine similarity than distance. 
Misleading, perhaps?\\n\\nSince LSH involves transferring a similarity measure in a higher-dimensional space (in our case, cosine similarity), to a similarity measure in a lower-dimensional space (in our case, Hamming distance), we used the notation $dist$ for notational convenience. We have clarified this and also emphasized we are not referring to cosine distance. \\n\\n### Question 4\\n> Line 419: did you mean \\\"LSH dimension does significantly impact performance\\\" --> does not?\\n\\nThank you for pointing this out. It was a typo and we mean \\\"does not\\\". We have fixed this error in the paper.\\n\\n---\\nThank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know.\\n\\n## References\\n\\n[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference [...].\\n\\n[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence [...]\\n\\n[3] Xiao, G., Tian, Y., Chen, B., Han, S., & Lewis, M. (2023). Efficient streaming language models with attention sinks.\\n\\n[4] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.\\n\\n[5] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.\\n\\n[6] Devoto, A., Zhao, Y., Scardapane, S., & Minervini, P. (2024). A Simple and Effective $ L_2 $ Norm-Based Strategy for KV Cache Compression. arXiv preprint arXiv:2406.11430.\\n\\n[7] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., Du, Z., Liu, X., Zeng, A., Hou, L. and Dong, Y. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.\\n\\n[8] Charikar, M. S. (2002, May). 
Similarity estimation techniques from rounding algorithms.\\n\\n[9] Kitaev, N., Kaiser, \\u0141., & Levskaya, A. (2020). Reformer: The efficient transformer.\"}", "{\"comment\": \"> I wonder why H2O suffers from a decrease in throughput compared to full attention.\\n\\nBoth H2O and Scissorhands have lower throughput compared to full attention because of the overhead introduced by attention accumulation or attention averaging (Scissorhands). This is made more obvious by the very long prompts in the MultiNews task.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the valuable feedback and suggestions. Below, we address all stated weaknesses and questions.\\n### Weakness 1\\n> There is no comparison with other static KV compression baselines, including H2O, streamingLLM, and SnapKV.\\n\\nThanks for this suggestion. We have added comparisons to several other well-cited KV cache compression strategies as baselines: H$_2$O [1], ScissorHands [2], and FastGen [3]. Our new results show that LSH-E performs comparably to H$_2$O and Scissorhands, and outperforms L2 and Fastgen on free form question answering. \\n\\nWe have also included two new tasks from LongBench [4]: MultiNews and GovReport. Both are long-context summarization tasks since this task type was missing from our suite of evaluations. Per Tables 1 and 2 our method demonstrates comparable or superior performance on the two new LongBench tasks across various KV cache budgets. In the MultiNews summarization task, LSH-E achieves higher Rouge L score at most cache budgets, outperforming all baselines.\\n\\n### Weakness 2\\n> Show metrics for latency or throughput, not just compression ratio.\\n\\nThanks for the suggestion. We have added throughput metrics on the LongBench GovReport summarization task in Table 4. 
LSH-E's prefill speed is **1.5-2x as fast as H$_2$O and Scissorhands** and **17x as fast** as FastGen even without low-level optimizations (i.e., expressing our hash tables in true binary bits). At the decoding stage, LSH-E is also comparable to L2 and faster than the other baseline methods. \n\n#### Table 4: Prefill and Decode Speed on LongBench MultiNews Summarization\n| Strategy | Cache Size | Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |\n|------------------------|--------------------------------|---------|---------------------|-----------------------|\n| Full | 100% | 0.192 | 16.071 | 16573.492 |\n| LSH-E | 30% | 0.180 | 22.880 | 20293.524 |\n| L2 | 30% | 0.165 | 23.981 | 20628.160 |\n| H$_2$O | 30% | 0.175 | 21.555 | 13025.776 |\n| Scissorhands | 30% | 0.175 | 21.448 | 13004.254 |\n| LSH-E | 50% | 0.186 | 22.846 | 20459.961 |\n| L2 | 50% | 0.174 | 16.013 | 15851.952 |\n| H$_2$O | 50% | 0.181 | 21.973 | 13969.985 |\n| Scissorhands | 50% | 0.182 | 20.978 | 13549.967 |\n| LSH-E | 70% | 0.187 | 22.914 | 21002.334 |\n| L2 | 70% | 0.187 | 24.305 | 21303.763 |\n| H$_2$O | 70% | 0.184 | 21.793 | 14050.521 |\n| Scissorhands | 70% | 0.183 | 21.705 | 13954.693 |\n| LSH-E | 90% | 0.185 | 22.873 | 21229.230 |\n| L2 | 90% | 0.186 | 24.010 | 21305.693 |\n| H$_2$O | 90% | 0.181 | 21.665 | 14007.697 |\n| Scissorhands | 90% | 0.182 | 21.411 | 14025.440 |\n| Fastgen | Attention recovery frac 70% | 0.129 | 12.752 | 1171.069 |\n| Fastgen | Attention recovery frac 75% | 0.174 | 12.291 | 1157.987 |\n| Fastgen | Attention recovery frac 80% | 0.184 | 11.850 | 1142.679 |\n| Fastgen | Attention recovery frac 85% | 0.183 | 11.658 | 1164.689 |\n\n### Question 1\n> Does this method work well with quantization (KIVI, AWQ)? \n\nLSH-E will work with quantization. Additionally, LSH and SimHash can themselves be used as a quantization method. 
Although we did not experiment with combining LSH-E and quantization, we think it will be a good inclusion in a future work. \\n\\n### Question 2\\n> How long does LSH-E increase first token latency?\\n\\nWhile we don't have specific numbers on Time-to-first-token (TTFT), our throughput results in Table 4 show that LSH-E is much faster at the pre-fill stage compared to attention-accumulation methods such as H$_2$O and Scissorhands and is on par with L2. Thus, the time to first token latency should be smaller than H$_2$O, Scissorhands and Fastgen and similar to that of L2.\\n\\n---\\nThank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know.\\n\\n### References\\n[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference of large language models.\\n\\n[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence of importance hypothesis for llm kv cache compression at test time.\\n\\n[3] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. Please let us know if you have any further comments.\\n\\nThank you,\"}", "{\"title\": \"Rebuttal by Authors Pt 1\", \"comment\": \"We thank the reviewer for the valuable feedback and suggestions. Below, we address all stated weaknesses and questions. Given that a score of \\\"1\\\" is typically reserved for a work which is severely technically flawed or extremely incremental, if you believe that we have addressed your concerns, we would appreciate if the reviewer would be willing to reassess their score. 
We are more than happy to discuss any further concerns.

### Weakness 1
> The term "novel" should not be used for LSH in this context, as it is not a new approach and has appeared in prior work. Specifically, the methods used in KDEformer, Hyperattention, QJL, and SubGen demonstrate significant overlap, yet these works are not cited here, despite their relevance.

Respectfully, we disagree that these methods demonstrate significant overlap with our approach, as only SubGen [5] appears to be a token eviction strategy. It reduces the cache by clustering key embeddings and choosing representatives from the key clusters to process attention. It appears that it must initially view all embeddings to perform this clustering, which is suitable for the CPU but would result in VRAM blowup on the GPU for long enough context. In contrast, our approach simply looks at the portion of the context within the memory budget to form an initial eviction and then proceeds token-by-token, swapping embeddings in and out of the cache based purely on Hamming distances.

Hyperattention [2], QJL [3], and KDEFormer [4] use LSH to approximate the attention module $A$, apparently without token eviction, in the vein of works descending from Reformer [1]. However, since all of these methods do have memory-reductive effects, we appreciate the reviewer pointing us toward this literature, which we have now included in our related-works discussion under "Memory-Efficient Transformers."

### Weaknesses 2 - 4
> The experimental setup lacks comprehensiveness; comparisons with alternative methods like H2O, SubGen, and other established baselines should be included to provide a more robust evaluation.

> The datasets used in the experiments are not sufficiently large for evaluating performance in long-context scenarios. Given that these methods target long-sequence processing, experiments should ideally use token sizes over 50,000. LongBench or other large-scale datasets would be more appropriate for a thorough evaluation.

> Additionally, runtime metrics should be reported to assess the efficiency of token generation and to substantiate the computational benefits claimed in the paper.

Thank you for your suggestion. We have added comparisons to several other well-cited KV cache compression strategies as baselines: H$_2$O [6], ScissorHands [7], and FastGen [9]. We have updated existing experiments in the paper to include these new baselines.

We also included two additional tasks from LongBench [8]: MultiNews and GovReport. Both are long-context summarization tasks, since this task type was missing from our suite of evaluations. Additionally, we have added pre-fill and decoding speed metrics on the LongBench MultiNews dataset.

Our new results show that LSH-E performs comparably to H2O and Scissorhands, and outperforms L2 and FastGen on free-form question answering tasks. In the two new summarization tasks, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves a higher Rouge L score at most cache budgets, outperforming all baselines and demonstrating LSH-E's robustness and effectiveness in handling very large context lengths. LSH-E is also faster: our pre-fill is **1.5-2x as fast** as attention-dependent methods like H2O and Scissorhands, and **17x as fast** as FastGen. At the decoding stage, LSH-E is also comparable to L2 and faster than the other baseline methods. Please see the table below for more details.

### Question 2
> Given that LSH-E's efficiency largely depends on its CUDA implementation, can you elaborate on any specific optimizations made within the CUDA code?

Although we have not applied any CUDA optimizations yet, LSH-E already demonstrates comparable and even superior computational speed and memory efficiency relative to the baseline methods. If we use actual bits for the LSH hash code, we can reduce the memory overhead of LSH-E by a factor of 8. We also expect faster Hamming-distance computation, further increasing the throughput of LSH-E.

### Question 3
> Could you clarify how LSH-E handles multi-head attention? Specifically, is each head processed separately with its own LSH compression, or is there a shared mechanism across heads?

Each head maintains its own LSH hash table and processes its own LSH compression and eviction.

## Rebuttal by Authors Pt 2

## References

[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference [...].

[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence [...]

[3] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023).
Longbench: A bilingual, multitask benchmark for long context understanding.

[4] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.

[5] Kitaev, N., Kaiser, Ł., & Levskaya, A. (2020). Reformer: The efficient transformer.

[6] Zandieh, A., Daliri, M., & Han, I. (2024). QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead.

[7] Han, I., Jayaram, R., Karbasi, A., Mirrokni, V., Woodruff, D. P., & Zandieh, A. (2023). Hyperattention: Long-context attention in near-linear time.

[8] Zandieh, A., Han, I., Daliri, M., & Karbasi, A. (2023, July). Kdeformer: Accelerating transformers via kernel density estimation.

[9] Zandieh, A., Han, I., Mirrokni, V., & Karbasi, A. (2024). SubGen: Token Generation in Sublinear Time and Memory.

## Summary of Rebuttal Pt 4

# Final Comment
We greatly appreciate reviewer feedback. Our rebuttal addresses all questions and concerns. We would appreciate it if the reviewers could update their scores accordingly. Please let us know if you have more comments or questions.

# References

[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference of large language models.

[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence of importance hypothesis for llm kv cache compression at test time.

[3] Xiao, G., Tian, Y., Chen, B., Han, S., & Lewis, M. (2023). Efficient streaming language models with attention sinks.

[4] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.

[5] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023).
Model tells you what to discard: Adaptive kv cache compression for llms.

[6] Devoto, A., Zhao, Y., Scardapane, S., & Minervini, P. (2024). A Simple and Effective $L_2$ Norm-Based Strategy for KV Cache Compression. arXiv preprint arXiv:2406.11430.

[7] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., Du, Z., Liu, X., Zeng, A., Hou, L. and Dong, Y. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.

[8] Charikar, M. S. (2002, May). Similarity estimation techniques from rounding algorithms.

[9] Kitaev, N., Kaiser, Ł., & Levskaya, A. (2020). Reformer: The efficient transformer.

## Comment by Authors
Dear Reviewer,

We greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. Please let us know if you have any further comments.

Thank you.

## Comment by Reviewer
I disagree with the measured throughput. Even if H2O requires accumulating attention scores, a 40% decrease in performance is impossible.

Personally, I suspect the difference may lie in the fact that full attention can use flash-attn, but H2O cannot.

However, other than this, I think most concerns are addressed. The typos are fixed and the baselines are presented. I have decided to raise my score to 6.

## Rebuttal by Authors Pt 1
We thank the reviewer for the valuable feedback and suggestions. We are encouraged to see that reviewer HtGz finds our approach computationally very affordable and appreciates our elimination of data transfer between the CPU and GPU. Below, we address all stated weaknesses and questions.

### Weakness 1 & Question 1
> Under "Configuration and Setup", it is mentioned that you "keep the most recent 10 tokens and the first 4 tokens of the prompt always in the KV cache." Is the L2 eviction baseline also configured this way?

Yes, all other baselines are also configured in the same way.
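To make the shared retention rule concrete, the following is a minimal illustrative sketch (the function name and NumPy-based masking are our own, not the cold-compress implementation): regardless of how per-token eviction scores are produced (L2 norm, accumulated attention, or Hamming distance), the sink and recency windows are simply masked out before choosing a victim.

```python
import numpy as np

def pick_eviction_candidate(scores, num_sink=4, num_recent=10):
    """Return the index of the token to evict, never touching the first
    `num_sink` (attention-sink) tokens or the `num_recent` most recent ones.
    `scores` holds one eviction score per cached token (lower = evict first);
    the scoring rule itself is orthogonal to this masking step."""
    masked = np.array(scores, dtype=float)
    masked[:num_sink] = np.inf          # protect attention-sink tokens
    if num_recent > 0:
        masked[-num_recent:] = np.inf   # protect the recency window
    return int(np.argmin(masked))
```

For example, with monotonically increasing scores over 20 cached tokens, the candidate is index 4 (the first unprotected position), even though index 0 has the lowest raw score.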
> It would be helpful to have an ablation study of LSH-E's performance with different numbers of first and recent tokens cached.

We have conducted ablation studies allowing/disallowing sink tokens and recent tokens. H$_2$O [1] (see Section 5.3 Q4) and Scissorhands [2] (see Section 4.1 "approach") also retain recent tokens and sinks, and determine that these strategies are essential for full performance. We find a similar trend, as shown in the tables below. In fact, the cold-compress library turns this setting on by default due to the documented necessity of this strategy. Specifically, regardless of eviction strategy, the first 4 tokens of the prompt (the sinks according to [3]) are kept, and the 10 most recent tokens during every step of decoding are maintained.

We believe this ablation study not only validates the necessity of maintaining these tokens for optimal performance but also aligns LSH-E's configuration with standard practices in competing methods like H2O and Scissorhands. We hope that the ablation results strengthen the empirical foundation of our method, demonstrating that these design choices are essential and justified.

#### Ablation of Attention Sink Tokens and Recent Tokens on GSM8K Free Response Question Answering
| Strategy | Cache Budget (%) | BertScore F1 | Rouge L | ChatGPT as a Judge Avg |
|---|---|---|---|---|
| LSH-E | 30% | 0.873 | 0.341 | 3.173 |
| LSH-E no sink & recent | 30% | 0.652 | 0.048 | 1.028 |
| L2 | 30% | 0.865 | 0.288 | 1.880 |
| L2 no sink & recent | 30% | 0.844 | 0.228 | 1.270 |
| LSH-E | 50% | 0.880 | 0.393 | 4.110 |
| LSH-E no sink & recent | 50% | 0.777 | 0.173 | 1.513 |
| L2 | 50% | 0.875 | 0.355 | 2.936 |
| L2 no sink & recent | 50% | 0.866 | 0.318 | 2.217 |
| LSH-E | 70% | 0.881 | 0.401 | 4.313 |
| LSH-E no sink & recent | 70% | 0.841 | 0.295 | 2.687 |
| L2 | 70% | 0.879 | 0.386 | 3.689 |
| L2 no sink & recent | 70% | 0.876 | 0.374 | 3.390 |
| LSH-E | 90% | 0.881 | 0.403 | 4.388 |
| LSH-E no sink & recent | 90% | 0.868 | 0.363 | 3.630 |
| L2 | 90% | 0.881 | 0.400 | 4.208 |
| L2 no sink & recent | 90% | 0.880 | 0.397 | 4.110 |

### Weaknesses 2 & 3 and Question 3

> The benchmarks seem limited; there are only two datasets per task [..].

> Evaluation does not include end-to-end speedup numbers [...]

> Is it possible to perform more evaluations on LongBench tasks?

Thanks for this suggestion. We have added two LongBench [1] summarization tasks: MultiNews and GovReport. Additionally, we have added several other well-cited KV cache compression strategies: FastGen [2], H$_2$O [2], and ScissorHands [3]. We have updated existing experiments in the paper to include these new baselines. We also provide results of the two summarization tasks below.

In these new experiments, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves a higher Rouge L score at most cache budgets, outperforming all baselines and demonstrating LSH-E's robustness and effectiveness in handling very large context lengths.

We also report decoding and pre-fill tokens-per-second results on the LongBench MultiNews task. LSH-E is 1.5-2x as fast as H2O and Scissorhands, and 17x as fast as FastGen at the pre-fill stage. Even without low-level optimizations (e.g., expressing hash tables in binary bits), LSH-E proved to be as fast as the L2 strategy in decoding and significantly faster than attention-based baselines. This speedup was achieved while maintaining competitive quality metrics, demonstrating the computational efficiency of LSH-E. Please see the table below for details.

## Response to Rebuttals
I thank the authors for a thorough response, especially the new benchmarks and ablation studies. I have a few more suggestions and thoughts:
1. The strength of LSH-E is that it pushes the Pareto frontier for the quality-throughput tradeoff. I suggest displaying a plot that clearly illustrates this.
2. Please include the sink-recent token ablation results in the paper. It would also be helpful to see "full attention" and "only sink and recent tokens" rows, as well as separate "with sink" and "with recent tokens" variants of L2 and LSH-E.
3. While the empirical results are intriguing, it still concerns me that there is no theoretical or intuitive explanation of why the greedy approach is expected to work. This is even more concerning after seeing that LSH-E no sink performs very poorly but LSH-E with sink performs much better than L2. Maybe it would help to show the individual GPT judging criteria (coherence, faithfulness, helpfulness) and examples that demonstrate that the individual criteria scores are reasonable. Perhaps this can provide some insights as to why the greedy approach works.
For example, maybe the greedy approach leads to the premature eviction of the most recent tokens, while forcing the data structure to keep these tokens for a few iterations has a regularizing effect.

## Clarify for several problems
>> However, the size of the KV cache scales quadratically with sequence length n and linearly with the number of attention layers and heads. (Line 38-39)

This is not true -- the size of the KV cache scales linearly with sequence length n.

>> For example, maintaining the KV cache for a sequence of 4K tokens in half-precision (FP16) can require approximately ~16GB of memory for most models within the Llama 3 family (Dubey et al., 2024). (Line 42-43)

This is also not true. A 4K context length only occupies 500MB for Llama3-8B, 1.25GB for Llama3-70B, or 2GB for Llama3-405B.

Usually, when the context size is not very large, the majority of time is spent on the MLP instead of attention. Typically, the boundary lies between 16K and 32K, depending on the model architecture and GPUs.

To make sure readers well understand the technique presented in the paper, I ask for:
- 1. The average context length of the benchmarks tested, especially the benchmark with the longest average context lengths.
- 2. The GPU used in the experiments (and the framework, e.g., TensorRT-LLM, vLLM, SGLang, MLC-LLM, or native PyTorch/JAX).
- 3. An explanation of the two problems I mentioned above. For example, they could be typos, with a note on how you would modify them in a future version (I know the PDF deadline has passed). Other clarification (e.g., I, the reviewer, could also be wrong) is also acceptable.

## Response from Authors Pt 2
> While the empirical results are intriguing, it still concerns me that there is no theoretical or intuitive explanation why the greedy approach is expected to work. This is even more concerning after seeing that LSH-E no sink performs very poorly but LSH-E with sink performs much better than L2. Maybe it would help to show the individual GPT judging criteria (coherence, faithfulness, helpfulness) and examples that demonstrate that the individual criteria scores are reasonable. Perhaps this can provide some insights as to why the greedy approach works. For example, maybe the greedy approach leads to the premature eviction of the most recent tokens while forcing the data structure to keep these tokens for a few iterations has a regularizing effect.

In regards to variations in performance with and without the sink, we point the reviewer towards [1], which empirically shows that the sink registers significant attention regardless of layer, head, and decoding step. Methods such as H$_2$O and the cold-compress library retain the sink by default in response to this observation. Although the sink is important for high performance, in our opinion it is not related to the success of the greedy eviction approach.

We provide an informal "proof sketch" of the error of LSH-E, that is, how much the KV cache compressed by our strategy deviates from the uncompressed cache. We leverage "The Persistence of Importance Hypothesis," first suggested and observed in the well-cited Scissorhands paper (Liu et al., 2023) [2]: tokens which are "influential" at one timestep (i.e., produce a high attention score with the current token) tend to produce high attention at later steps. Interestingly, the authors, like us, use the inverse of the hypothesis to inform token dropping: tokens with low attention scores should be dropped, as they will not be influential later.
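As a self-contained toy illustration of the SimHash estimate underlying this scoring (the dimensions and bit counts below are chosen for the demo; LSH-E itself uses short 16-bit codes, since it only needs a relative ranking of cached keys, not a precise angle): for sign-random-projection hashing, the probability that a single bit differs between two codes equals θ/π, so the normalized Hamming distance is an unbiased estimate of the angle between the vectors [8].

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(x, planes):
    # One bit per random hyperplane: the sign of the projection.
    return (planes @ x) > 0

d, n_bits = 64, 512
planes = rng.standard_normal((n_bits, d))  # shared Gaussian sketch
x, y = rng.standard_normal(d), rng.standard_normal(d)

# P[bit differs] = theta / pi, so the mean Hamming distance times pi
# estimates the angle between x and y.
hamming_frac = np.mean(simhash(x, planes) != simhash(y, planes))
est_angle = hamming_frac * np.pi
true_angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
```

With 512 bits the estimate typically lands within roughly a tenth of a radian of the true angle; with only 16 bits it is much noisier, but still sufficient to rank cached keys by dissimilarity to the current query, which is all the greedy eviction rule needs.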
\\n\\nAssume the hypothesis is true and that our LSH attention estimation is exact.$^*$ Then Theorem 4.1 of [2] can be directly applied to LSH-E, which assumes a single token is dropped each timestep: *\\\"Notice that when $m = 1$, i.e., in each iteration, we drop one token with the lowest score, the cache will always maintain $B$ tokens. If the ranking of the attention scores does not change in each iteration, Algorithm 2 will always drop tokens with the smallest attention scores.\\\"* Per this theorem, the upper bound on attention loss error of LSH-E scales directly with the imposed budget $B$, i.e., it decreases with larger budget. \\n\\n$^*$Our LSH estimation is not exact. The error is probabilistically controlled by the LSH dimension. Assuming a new, independently generated Gaussian projection is used at each timestep for the LSH, the probability of the LSH being correct is independent for each step, and thus multiplicative. Consequently, the user sets the sketch length sufficiently large to achieve a desired confidence $\\\\delta$. Typical to sketching theory, the guarantees are typically far more aggressive than what is practically achievable: we use modest, fixed sketch dimension of 16 and do not refresh the Gaussian sketch/projection -- we simply maintain the existing hash codes in our dictionary and add new ones with the same sketch.\\n\\nBoth the Scissorhands estimator and our LSH estimator are inexact (which both use restricted context windows per available memory), but the error in both cases seemingly does not significantly impact language output. Since Scissorhands fully computes attention scores over its window, it tends to survive quality at very high compression, while ours trades increased error at higher compression rates for dramatically improved throughput -- **our estimator is far faster.**\\n\\n\\n\\n## Reference\\n[1] Xiao, G., Tian, Y., Chen, B., Han, S., & Lewis, M. (2023). Efficient streaming language models with attention sinks. 
arXiv preprint arXiv:2309.17453.\\n\\n[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence of importance hypothesis for llm kv cache compression at test time. Advances in Neural Information Processing Systems, 36.\"}", "{\"title\": \"Rebuttal by Authors Pt 1\", \"comment\": \"We thank the reviewer for the valuable feedback and suggestions. We are encouraged that you find our work simple, clear and well-motivated. Below, we address all stated weaknesses and questions.\\n\\n### Weakness 1\\n> Important related studies and baselines are missing: Singhania, P., Singh, S., He, S., Feizi, S., & Bhatele, A. (2024). Loki: Low-Rank Keys for Efficient Sparse Attention. arXiv preprint arXiv:2406.02542. Tang, J., Zhao, Y., Zhu, K., Xiao, G., Kasikci, B., & Han, S. (2024). Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. arXiv preprint arXiv:2406.10774.\\n\\nThank you for suggesting these baselines. Respectfully, we disagree that these methods demonstrate significant overlap with our approach as they are attention efficiency / approximation methods, rather than KV cache eviction strategies. \\n\\n### Weakness 2 & 4\\n> The key measures of the targeted task should be have more accurate inference with lower memory footprint and latency. I do not agree with the methodology of not comparing with other \\\"non attention-free\\\" methods.\\n\\n> The execution time of the proposed system is missing.\\n\\nThank you for your suggestion. We have added comparisons to several other well-cited KV cache compression strategies as baselines: H2O [1], ScissorHands [2], and FastGen [3]. We have updated existing experiments in the paper to include these new baselines.\\n\\nWe also included two additional tasks from LongBench [4]: MultiNews and GovReport. Both are long-context summarization tasks, since this task type was missing from our suite of evaluations. 
Additionally, we have added pre-fill and decoding speed metrics on the LongBench MultiNews dataset.

Our new results show that LSH-E performs comparably to H2O and Scissorhands, and outperforms L2 and FastGen on free-form question answering tasks. In the two new summarization tasks, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves a higher Rouge L score at most cache budgets, outperforming all baselines and demonstrating LSH-E's robustness and effectiveness in handling very large context lengths. LSH-E is also faster: our pre-fill stage is 1.5-2x as fast as attention-dependent methods like H2O and Scissorhands, and 17x as fast as FastGen. At the decoding stage, LSH-E is also comparable to L2 and faster than the other baseline methods. Please see the table below for more details.

### Table: Results of LongBench GovReport and MultiNews Summarization with Throughput
| Strategy | Cache Budget | GovReport Rouge L | MultiNews Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |
|---|---|---|---|---|---|
| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |
| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |
| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |
| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |
| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |
| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |
| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |
| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |
| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |
| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |
| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |
| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |
| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |
| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |
| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |
| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |
| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |
| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |
| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |
| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |
| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |

### Weakness 3
> The presentation of experiments needs to be improved: lack of discussions and intuitions in the experiment analysis. For example, why does LSH-E outperform Full in Figure 4a; why does LSH-E become worse than L2 after 50% cache budget in Figure 4b? We have many subsubsections in the experiments, but most of their content is barely more than a textual illustration of the figures and results, with no discussion of why we would have those results.

Thank you for your suggestion. We will update the paper to include more analysis and discussion of the experiment results. KV cache eviction strategies sometimes perform better than using the full cache because the evicted tokens are not always useful: evicting useless tokens can actually help the language quality of generated answers.

## Official Review
**Summary:** This paper presents new methods to accelerate inference of auto-regressive transformers used in most modern-day decoder-based LLM architectures. Indeed, the main drawback of existing systems is the size of the "KV Cache" or Key-Value Cache, which is used during the attention mechanism. To speed up the attention calculation, most systems have a cache which remembers the keys and values of commonly used tokens, to avoid recomputing them for each token decoding.
However, such a cache, for it to be performant at inference time, must scale quadratically with the sequence length, and linearly in the number of layers and attention heads.

(Authors: please explain why for the uninformed reader -- this is stated in the intro, but without explanation)

In this paper, the authors present an LSH-based method to evict far-away key tokens. Indeed, suppose we have an LSH which gets a binary encoding of any vector using the random hyperplane projection method (SimHash). Then, we can first pre-process and compute the Hamming distance between the query token and all key tokens, and evict the farthest one, as this is the one least likely to affect the overall attention soft-max operation.

They implement this simple scheme and provide a range of quality vs. cache size metrics, comparing with one other KV cache policy called L2-Dropout Cache, which drops keys based on their magnitudes.

**Soundness:** 2
**Presentation:** 3
**Contribution:** 2

**Strengths:**
Studies an important problem of much significance in today's LLM era.

Presents a simple yet elegant approach.

Does good evaluations on a range of use-cases.

**Weaknesses:**
Why is there no timing experiment, since that will be one key benefit of caching?

Why only restrict to attention-free cache policies and specifically only compare with the L2-dropout baseline?

Conceptually, what is the key difference with Reformer? I have not read that paper but you mention in passing that it is using LSH and SimHash also. Is which cells to evaluate vs. what to evict the only difference between Reformer and your work? If so, worth comparing with Reformer also in plots?

What is the rationale of the policy? Why can't a token just evicted become relevant again? I guess is there some language-based "locality of reference"?

Do an ablation of the hardcoded bits, i.e., you mention you hard-cache the first few and last few tokens. What is the contribution of this to your overall success metrics?

The eviction policy is not clearly understandable in how it aggregates the Hamming distances over time steps. Is it only based on the most recent time step, or some more complex rule?

**Questions:**
Line 52: "However, this L2 dropout strategy only performs well on long-context retrieval tasks. It is specialized to retain only those tokens with the highest attention" -- be more specific. Why is this?

Line 57: "wide variety of tasks?" -- how do you define this?

Line 145: "Formally for our setup, dist$_d$(x, y) ∝ cos θ$_{x,y}$" -- here it is more a measure of cosine similarity than distance. Misleading, perhaps?

Line 419: did you mean "LSH dimension does significantly impact performance" --> does not?

**Flag for Ethics Review:** No ethics review needed.
**Rating:** 5
**Confidence:** 4
**Code of Conduct:** Yes

## Comment by Authors
We thank the reviewer for the valuable feedback and suggestions. Below, we address all stated weaknesses and questions.

> However, such a cache, for it to be performant at inference time, must scale quadratically with the sequence length, and linearly in the number of layers and attention heads.

> (Authors: please explain why for the uninformed reader -- this is stated in the intro, but without explanation)

Attention-based eviction strategies like H2O and Scissorhands need to accumulate the attention of each token in the KV cache. Assuming the size of the KV cache is N tokens, for each decoded token, N attention scores need to be added, which requires $\mathcal{O}(N^2)$ computation over the sequence (all pairwise dot-products) and storage ($N^2$ entries). Therefore, the time complexity of maintaining the accumulated attention of tokens is approximately O(N^2), i.e., quadratic in the sequence length, because N is a percentage of the max sequence length.
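To make this contrast concrete, here is an illustrative sketch (sizes, names, and the scoring helpers are our own, not the paper's code) of the per-step work: an attention-based score needs a full q·Kᵀ pass in the model dimension d plus a running accumulator per token, while a Hamming-based score only compares short binary codes.

```python
import numpy as np

def hamming_scores(query_code, key_codes):
    # O(N * b) bit comparisons per decoding step; no attention matrix is
    # touched or stored, and no per-token accumulators are maintained.
    return np.count_nonzero(query_code != key_codes, axis=1)

def accumulated_attention_step(q, K, acc):
    # Attention-score policies (H2O/Scissorhands-style) instead need the
    # full q @ K^T pass in dimension d plus a running accumulator per token.
    logits = K @ q
    weights = np.exp(logits - logits.max())
    return acc + weights / weights.sum()

N, d, b = 1000, 128, 16
rng = np.random.default_rng(1)
K = rng.standard_normal((N, d))            # cached keys
q = rng.standard_normal(d)                 # current query
key_codes = rng.integers(0, 2, size=(N, b)).astype(bool)  # cached hash codes
query_code = rng.integers(0, 2, size=b).astype(bool)

# Evict the key farthest from the current query in Hamming distance.
evict_idx = int(np.argmax(hamming_scores(query_code, key_codes)))
```

With b = 16 bits per code, the Hamming pass handles 16-entry boolean rows instead of 128-dimensional float dot-products, and it carries no state between steps.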
### Weaknesses 1 & 2
> Why only restrict to attention-free cache policies and specifically only compare with the L2-dropout baseline?

> Why is there no timing experiment, since that will be one key benefit of caching?

Thanks for the suggestion. We have added comparisons to three more baselines, H2O, Scissorhands, and FastGen, to contextualize LSH-E's performance against state-of-the-art methods. We have updated existing experiments in the paper to include these new baselines. Additionally, we added two long-context summarization tasks from the LongBench benchmark, MultiNews and GovReport, and report the results in the table below.

In these new experiments, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves a higher Rouge L score at most cache budgets, outperforming all baselines and demonstrating LSH-E's robustness and effectiveness in handling very large context lengths.

We also added timing experiments and report throughput metrics: decoding and pre-fill tokens-per-second results on the LongBench MultiNews task. LSH-E is 1.5-2x as fast as H2O and Scissorhands, and 17x as fast as FastGen at the pre-fill stage. Even without low-level optimizations (e.g., expressing hash tables in binary bits), LSH-E proved to be as fast as the L2 strategy in decoding and significantly faster than attention-based baselines. This speedup was achieved while maintaining competitive quality metrics, demonstrating the computational efficiency of LSH-E.

### Table: Results of LongBench GovReport and MultiNews Summarization with Throughput
| Strategy | Cache Budget | GovReport Rouge L | MultiNews Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |
|---|---|---|---|---|---|
| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |
| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |
| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |
| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |
| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |
| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |
| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |
| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |
| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |
| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |
| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |
| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |
| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |
| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |
| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |
| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |
| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |
| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |
| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |
| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |
| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |

### Weakness 3
> Conceptually, what is the key difference with Reformer? I have not read that paper but you mention in passing that it is using LSH and SimHash also. Is which cells to evaluate vs. what to evict the only difference between Reformer and your work? If so, worth comparing with Reformer also in plots?

Thanks for the question.
We clarified the conceptual distinctions between LSH-E and related works such as Reformer, H2O, and SubGen and updated the related works section of our paper.\\n\\nThe biggest difference is that Reformer is an efficient attention replacement rather than a kv cache eviction strategy. Reformer and our work use the same tools but for different purposes and to achieve different goals. Reformer uses LSH and simhash to group tokens that are similar into buckets, and restrict attention computation to tokens within the same bucket for efficiency of computation. Our work uses LSH to find the least similar tokens in history and evict them from the KV cache for efficiency of memory usage.\", \"title\": \"Rebuttal by Authors Pt 1\"}", "{\"title\": \"Response from authors pt1\", \"comment\": \"We would like to thank the reviewer for the constructive feedback. Below, we address all stated suggestions and questions.\\n\\n> The strength of LSH-E is that it pushes the Pareto frontier for the quality-throughput tradeoff. I suggest displaying a plot that clearly illustrates this.\\n\\nThank you for recognizing the strength of LSH-E. We will include such a plot in the next revision of the paper.\\n\\n> Please include the sink-recent token ablation results in the paper. It would also be helpful to see \\\"full attention\\\" and \\\"only sink and recent tokens\\\" rows, as well as separate \\\"with sink\\\" and \\\"with recent tokens\\\" variants of L2 and LSH-E.\\n\\nThanks for the suggestion. We performed additional ablations including using only sink and recent tokens as a strategy, and L2 and LSH-E with only sink tokens, and with only recent tokens. In this ablation the LSH dimension was set to 16 bits. The number of sink tokens is 4 and the number of recent tokens is 10 except for the pure Sink & Recent strategy, which keeps (cache_size - 4) most recent tokens. Please see the updated table below for details. 
We will include the results of the sink-recent ablation in the next revision of the paper.\\n\\n### Table: Ablation of Attention Sink Tokens and Recent Tokens on GSM8K Free Response Question Answering\\n| Cache Budget | Strategy | Bert F1 | Rouge L | GPT Rouge | GPT Coherence | GPT Faithfulness | GPT Helpfulness |\\n|---|---|---|---|---|---|---|---|\\n| 10% | LSH-E | 0.831 | 0.157 | 1.018 | 1.387 | 1.147 | 1.083 |\\n| 10% | LSH-E no sink no recent | 0.708 | 0.025 | 1.000 | 1.000 | 1.000 | 1.000 |\\n| 10% | LSH-E no sink | 0.713 | 0.027 | 1.000 | 1.000 | 1.000 | 1.000 |\\n| 10% | LSH-E no recent | 0.847 | 0.189 | 1.100 | 2.002 | 1.348 | 1.326 |\\n| 10% | L2 | 0.826 | 0.151 | 1.005 | 1.293 | 1.098 | 1.033 |\\n| 10% | L2 no sink no recent | 0.804 | 0.130 | 1.000 | 1.088 | 1.030 | 1.016 |\\n| 10% | L2 no sink | 0.836 | 0.178 | 1.026 | 1.600 | 1.138 | 1.096 |\\n| 10% | L2 no recent | 0.829 | 0.171 | 1.014 | 1.394 | 1.098 | 1.032 |\\n| 10% | Sink & Recent | 0.843 | 0.176 | 1.040 | 1.882 | 1.298 | 1.248 |\\n| 30% | LSH-E | 0.873 | 0.341 | 2.520 | 3.767 | 3.216 | 3.190 |\\n| 30% | LSH-E no sink no recent | 0.744 | 0.068 | 1.004 | 1.024 | 1.018 | 1.006 |\\n| 30% | LSH-E no sink | 0.744 | 0.066 | 1.006 | 1.018 | 1.028 | 1.002 |\\n| 30% | LSH-E no recent | 0.873 | 0.342 | 2.546 | 3.956 | 3.340 | 3.472 |\\n| 30% | L2 | 0.865 | 0.288 | 1.356 | 2.428 | 1.895 | 1.841 |\\n| 30% | L2 no sink no recent | 0.844 | 0.228 | 1.040 | 1.478 | 1.292 | 1.268 |\\n| 30% | L2 no sink | 0.865 | 0.290 | 1.474 | 2.750 | 2.010 | 2.102 |\\n| 30% | L2 no recent | 0.846 | 0.238 | 1.032 | 1.478 | 1.320 | 1.272 |\\n| 30% | Sink & Recent | 0.868 | 0.310 | 1.910 | 3.432 | 2.616 | 2.682 |\\n| 50% | LSH-E | 0.880 | 0.393 | 3.457 | 4.530 | 4.212 | 4.241 |\\n| 50% | LSH-E no sink no recent | 0.803 | 0.178 | 1.322 | 1.570 | 1.696 | 1.424 |\\n| 50% | LSH-E no sink | 0.802 | 0.179 | 1.362 | 1.554 | 1.684 | 1.440 |\\n| 50% | LSH-E no recent | 0.880 | 0.399 | 3.624 | 4.638 | 4.338 | 4.446 |\\n| 50% | L2 | 0.875 | 
0.355 | 2.190 | 3.494 | 3.035 | 3.027 |\\n| 50% | L2 no sink no recent | 0.866 | 0.318 | 1.548 | 2.690 | 2.320 | 2.308 |\\n| 50% | L2 no sink | 0.876 | 0.359 | 2.492 | 3.710 | 3.170 | 3.276 |\\n| 50% | L2 no recent | 0.866 | 0.319 | 1.570 | 2.686 | 2.382 | 2.336 |\\n| 50% | Sink & Recent | 0.879 | 0.385 | 3.412 | 4.488 | 4.054 | 4.122 |\\n| 70% | LSH-E | 0.881 | 0.401 | 3.734 | 4.671 | 4.404 | 4.444 |\\n| 70% | LSH-E no sink no recent | 0.847 | 0.295 | 2.350 | 2.818 | 2.912 | 2.612 |\\n| 70% | LSH-E no sink | 0.847 | 0.295 | 2.332 | 2.794 | 2.888 | 2.600 |\\n| 70% | LSH-E no recent | 0.881 | 0.402 | 3.884 | 4.790 | 4.546 | 4.650 |\\n| 70% | L2 | 0.879 | 0.386 | 2.934 | 4.184 | 3.817 | 3.820 |\\n| 70% | L2 no sink no recent | 0.876 | 0.374 | 2.684 | 3.836 | 3.510 | 3.528 |\\n| 70% | L2 no sink | 0.879 | 0.390 | 3.266 | 4.370 | 4.018 | 4.104 |\\n| 70% | L2 no recent | 0.876 | 0.374 | 2.718 | 3.842 | 3.522 | 3.516 |\\n| 70% | Sink & Recent | 0.881 | 0.401 | 3.810 | 4.720 | 4.428 | 4.508 |\\n| 90% | LSH-E | 0.881 | 0.403 | 3.837 | 4.722 | 4.468 | 4.525 |\\n| 90% | LSH-E no sink no recent | 0.868 | 0.363 | 3.222 | 3.784 | 3.826 | 3.618 |\\n| 90% | LSH-E no sink | 0.869 | 0.363 | 3.248 | 3.822 | 3.854 | 3.628 |\\n| 90% | LSH-E no recent | 0.882 | 0.406 | 4.018 | 4.788 | 4.562 | 4.650 |\\n| 90% | L2 | 0.881 | 0.400 | 3.569 | 4.578 | 4.324 | 4.361 |\\n| 90% | L2 no sink no recent | 0.880 | 0.397 | 3.460 | 4.486 | 4.210 | 4.282 |\\n| 90% | L2 no sink | 0.881 | 0.402 | 3.752 | 4.658 | 4.388 | 4.470 |\\n| 90% | L2 no recent | 0.880 | 0.397 | 3.438 | 4.482 | 4.188 | 4.238 |\\n| 90% | Sink & Recent | 0.881 | 0.405 | 4.006 | 4.792 | 4.572 | 4.644 |\\n| 100% | Full | 0.882 | 0.403 | 3.845 | 4.716 | 4.499 | 4.545 |\\n\\nFrom the results we can see that sink tokens have a bigger impact on the performance of LSH while recent tokens impact L2 more.\"}", "{\"title\": \"Rebuttal by Authors Pt 2\", \"comment\": \"### Weakness 4\\n> What is the rationale of the policy? 
Why can't a token just evicted become relevant again? I guess is there some language-based \\\"locality of reference\\\"?\\n\\nA token could become relevant again. But with the restriction of GPU memory and KV cache budget in mind, KV cache strategies must trade off between information loss and memory requirements. We have demonstrated through experiments that LSH-E achieves good performance on multiple real-world tasks and better speed compared to attention-based eviction methods.\\n\\n### Weakness 5\\n> Do ablation of the hardcoded bits, i.e., you mention you hard-cache the first few and last few tokens. What is the contribution of this to your overall success metrics?\\n\\nWe have conducted ablation studies allowing/disallowing sink tokens and recent tokens. H2O [1] (see Section 5.3 Q4) and Scissorhands [2] (see Section 4.1 \\\"approach\\\") also retain recent tokens and sinks and determine these strategies are essential for full performance. We find a similar trend, as shown in the tables below. In fact, the cold-compress library turns this setting on by default due to the documented necessity of this strategy. Specifically, regardless of eviction strategy, the first 4 tokens of the prompt (the sinks according to [3]) are kept, and the 10 most recent tokens during every step of decoding are maintained.\\n\\nWe believe this ablation study not only validates the necessity of maintaining these tokens for optimal performance but also aligns LSH-E\\u2019s configuration with standard practices in competing methods like H2O and Scissorhands. 
We hope that the ablation results strengthen the empirical foundation of our method, demonstrating that these design choices are essential and justified.\\n\\n### Table: Ablation of Attention Sink Tokens and Recent Tokens on GSM8K Free Response Question Answering\\n| Strategy | Cache Budget (%) | BertScore F1 | Rouge L | ChatGPT as a Judge Avg |\\n|---|---|---|---|---|\\n| LSH-E | 30% | 0.873 | 0.341 | 3.173 |\\n| LSH-E no sink & recent | 30% | 0.652 | 0.048 | 1.028 |\\n| L2 | 30% | 0.865 | 0.288 | 1.880 |\\n| L2 no sink & recent | 30% | 0.844 | 0.228 | 1.270 |\\n| LSH-E | 50% | 0.880 | 0.393 | 4.110 |\\n| LSH-E no sink & recent | 50% | 0.777 | 0.173 | 1.513 |\\n| L2 | 50% | 0.875 | 0.355 | 2.936 |\\n| L2 no sink & recent | 50% | 0.866 | 0.318 | 2.217 |\\n| LSH-E | 70% | 0.881 | 0.401 | 4.313 |\\n| LSH-E no sink & recent | 70% | 0.841 | 0.295 | 2.687 |\\n| L2 | 70% | 0.879 | 0.386 | 3.689 |\\n| L2 no sink & recent | 70% | 0.876 | 0.374 | 3.390 |\\n| LSH-E | 90% | 0.881 | 0.403 | 4.388 |\\n| LSH-E no sink & recent | 90% | 0.868 | 0.363 | 3.630 |\\n| L2 | 90% | 0.881 | 0.400 | 4.208 |\\n| L2 no sink & recent | 90% | 0.880 | 0.397 | 4.110 |\\n\\n### Weakness 6\\n> The eviction policy is not clearly understandable in how it aggregates the hamming distances over time steps. Is it only based on the most recent time step, or some more complex rule?\\n\\nThe Hamming distance is calculated per decoded token so it is based on the most recent time step and not aggregated over time steps. \\n\\n### Question 1\\n\\n> Line 52: \\\"However, this L2 dropout strategy only performs well on long-context retrieval tasks. It is specialized to retain only those tokens with the highest attention\\\" -- be more specific. Why is this?\\n\\nWe have updated our work to make this clearer. The L2 eviction strategy [6] was developed based on an empirical observation that smaller norm of key embedding correlates with higher attention score. 
For long-context retrieval tasks such as Common Words, Needle-in-a-Haystack, etc., high-attention score tokens are the most important tokens since the question's text will overlap with the piece of context that needs to be retrieved. However, for generative tasks such as summarization, free response question-answering, etc., more than just high-attention tokens are required, which is why our method tends to outperform L2 on these benchmarks for most compression settings. \\n\\n### Question 2\\n> Line 57: \\\"wide variety of tasks?\\\" -- how do you define this?\\n\\nThe common task types for KV cache compression experiments include multiple-choice, free response question-answering, long-context retrieval, and summarization. In our original draft, we included two benchmarks for each task type except for summarization -- which we have now added: MultiNews and GovReport from LongBench [4].\"}", "{\"summary\": \"This paper introduces a KV cache compression method based on LSH and shows that LSH-E can achieve good downstream performance on various downstream tasks with a 30%-70% compression ratio.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper applies novel LSH methods to KV cache problems. The motivations and reasons why LSH can produce a good performance are well discussed. Besides this, a static compression rate of 30% - 70% is also helpful for many LLM serving systems, given the accuracy is preserved.\", \"weaknesses\": \"1. There is no comparison with other static KV compression baselines, including H2O, streamingLLM, and SnapKV. If this problem is solved, I will raise my score.\\n2. Only the memory compression ratio is shown. 
I will ask for the wall clock speedups (latency or throughput).\", \"questions\": \"Besides the problems mentioned in Weakness,\\n1 Does this method work well with quantization (KIVI, AWQ)?\\n2 How long does LSH-E increase first token latency?\\n\\nThese two questions can be left for future work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors Pt 1\", \"comment\": \"We thank the reviewer for the valuable feedback and suggestions. We appreciate your recognition that our application of LSH approximate attention computation for eviction is efficient and a strength. Below, we address all stated weaknesses and questions.\\n\\n### Weakness 1\\n> Novelty: The novelty is limited.\\n\\nCould the reviewer expand on this? We are novel in several ways:\\n\\n 1. To the best of our knowledge, we are the only work using LSH for token eviction. Other works such as Reformer, QJL, Hyperattention, Subgen, and KDEFormer [5-9] use LSH to accelerate the attention computation but must initially view all queries, keys, and possibly the entire attention matrix, risking VRAM blowup. \\n\\n 2. We are the only attention-free token eviction strategy that makes a probabilistically-guaranteed estimation of attention (via known statistical properties of LSH). The only other comparable strategy, L2 eviction, relies on an observed correlation (low L2 norm = high attention), which may not hold for all transformer-based models and layers (see Figure 7 in our paper). \\n\\n 3. We propose a strategy which does not require construction of query/key embeddings of the entire context. We acquire the attention embeddings only for a percentage of the context that can fit within the user's memory budget and then perform token-by-token eviction for the remainder of the context. 
Interestingly, this approach does not appear in existing KV cache literature and we've integrated it for all other baselines. This may be regarded as an alternative strategy to contextual chunking.\\n\\n### Weakness 2\\n\\n> H2O / Scissorhands are known to not perform well on longbenchmark. Can we see some results on longbenchmark like passage retrieval datasets ?\\n\\n> Missing baselines --only baseline used is L2 norm. - Limited evaluation. can we get more results on longbenchmark at different budgets with standard baselines.\\n\\nThanks for this suggestion. We expanded the experiments to include two new tasks from the LongBench benchmarks: MultiNews and GovReport. Both are long-context summarization tasks, since this task type was missing from our suite of evaluations.\\n\\nAdditionally, we added comparisons to well-cited KV cache compression strategies, such as H2O [1], Scissorhands [2], and FastGen [5]. We have updated existing experiments in the paper to include these new baselines. We also provide results of the two summarization tasks in tables below. \\n\\nIn these new experiments, LSH-E consistently demonstrates comparable or superior Rouge L scores across various cache budgets. In the MultiNews summarization task, LSH-E achieves a higher Rouge L score at most cache budgets, outperforming all baselines, demonstrating LSH-E\\u2019s robustness and effectiveness in handling very large context lengths.\\n\\nWe also measured throughput metrics on the MultiNews summarization task. 
Per the throughput table below, our method performs better than the baselines on these two tasks across multiple KV cache budgets and our pre-fill speed is **1.5-2x as fast** as attention-dependent methods like H2O and Scissorhands, and even faster compared to FastGen.\\n\\n### Table: Results of LongBench GovReport and MultiNews Summarization with Throughput\\n| | | GovReport | MultiNews | | |\\n|---|---|---|---|---|---|\\n| Strategy | Cache Budget | Rouge L | Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |\\n| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |\\n| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |\\n| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |\\n| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |\\n| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |\\n| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |\\n| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |\\n| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |\\n| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |\\n| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |\\n| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |\\n| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |\\n| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |\\n| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |\\n| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |\\n| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |\\n| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |\\n| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |\\n| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |\\n| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |\\n| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. 
Please let us know if you have any further comments.\\n\\nThank you.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. Please let us know if you have any further comments.\\n\\nThank you,\"}", "{\"title\": \"Summary of Rebuttal Pt 2\", \"comment\": \"## 3. Ablation Studies on Attention Sink Tokens and Recent Tokens\\nTo address concerns about hardcoding specific tokens for retention, we conducted ablation studies on the impact of retaining attention sink tokens (first 4 tokens) and recent tokens (last 10 tokens). The results revealed that disabling these features led to performance degradation. For example, at a 50% cache budget on GSM8K, LSH-E without sink and recent tokens scored a Rouge L of 0.173 compared to 0.393 with these features enabled.\\n\\nThis study not only validated the necessity of maintaining these tokens for optimal performance but also aligned LSH-E\\u2019s configuration with standard practices in competing methods like H2O and Scissorhands. 
We hope that the ablation results strengthened the empirical foundation of our method, demonstrating that these design choices are essential and justified.\\n\\n### Table 2: Ablation of Attention Sink Tokens and Recent Tokens on GSM8K Free Response Question Answering\\n| Strategy | Cache Budget (%) | BertScore F1 | Rouge L | ChatGPT as a Judge Avg |\\n|---|---|---|---|---|\\n| LSH-E | 30% | 0.873 | 0.341 | 3.173 |\\n| LSH-E no sink & recent | 30% | 0.652 | 0.048 | 1.028 |\\n| L2 | 30% | 0.865 | 0.288 | 1.880 |\\n| L2 no sink & recent | 30% | 0.844 | 0.228 | 1.270 |\\n| LSH-E | 50% | 0.880 | 0.393 | 4.110 |\\n| LSH-E no sink & recent | 50% | 0.777 | 0.173 | 1.513 |\\n| L2 | 50% | 0.875 | 0.355 | 2.936 |\\n| L2 no sink & recent | 50% | 0.866 | 0.318 | 2.217 |\\n| LSH-E | 70% | 0.881 | 0.401 | 4.313 |\\n| LSH-E no sink & recent | 70% | 0.841 | 0.295 | 2.687 |\\n| L2 | 70% | 0.879 | 0.386 | 3.689 |\\n| L2 no sink & recent | 70% | 0.876 | 0.374 | 3.390 |\\n| LSH-E | 90% | 0.881 | 0.403 | 4.388 |\\n| LSH-E no sink & recent | 90% | 0.868 | 0.363 | 3.630 |\\n| L2 | 90% | 0.881 | 0.400 | 4.208 |\\n| L2 no sink & recent | 90% | 0.880 | 0.397 | 4.110 |\\n\\n## 4. Attention Loss Analysis\\n\\nWe added an analysis of attention loss for LSH-E, L2, and Scissorhands, quantifying the discrepancy introduced by the eviction strategy compared to maintaining the full cache. We measured the attention loss of each attention head and report the average. Attention loss is defined as the sum of the attention probabilities for evicted tokens. Or equivalently, 1 - the sum of the attention probabilities for the tokens in the compressed cache.\\n \\nThe attention loss was measured at 50% cache budget using prompts from the GSM8K question answering dataset. As per Table 3, all three methods have low attention loss at 50% cache budget, and LSH-E has lower attention loss compared to L2 and Scissorhands, proving LSH-E's ability to keep high attention tokens in the KV cache. 
\\n\\nBy quantifying attention loss, we demonstrated that LSH-E introduces minimal deviation from full-cache attention, addressing concerns about the theoretical guarantees of its quality.\\n\\n### Table 3: Attention Loss\\n| Strategy | Attention Loss |\\n|----------------|-------------------|\\n| LSH-E | 0.03357896805 |\\n| L2 | 0.03403072357 |\\n| Scissorhands | 0.04483547211 |\\n\\n\\n## 5. Clarified Novelty and Conceptual Differences\\nWe clarified the conceptual distinctions between LSH-E and related works such as Reformer, H2O, and SubGen and updated the related works section of our paper. While LSH-E uses LSH for token eviction, Reformer and similar methods use LSH to accelerate attention computation. This distinction underscores LSH-E\\u2019s novelty as a probabilistically guaranteed attention-free token eviction strategy, separating it from approaches like L2 eviction. Additionally, it does not require scanning the entirety of the context like existing approaches, which risks VRAM blowup.\\n\\nThis clarification strengthens the claim of LSH-E's novelty and highlights its practical advantages, particularly in memory-constrained scenarios.\\n\\n## 6. Improved Presentation and Addressed Minor Issues\\n\\nWe addressed several presentation issues, including improving axis captions in figures and fixing typographical errors. These changes enhanced the paper\\u2019s readability. Moreover, we expanded the discussion of results, providing intuitive explanations for observed trends, such as why LSH-E outperforms Full in certain cases and why its performance degrades at lower cache budgets.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. Please let us know if you have any further comments.\\n\\nThank you,\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you so much for the new ablation numbers! 
It is very interesting that it performs better without recent tokens, and that just recent tokens and sink perform almost as well as LSH-E with sink.\"}", "{\"title\": \"Rebuttal by Authors Pt 2\", \"comment\": \"### Table: Results of LongBench GovReport and MultiNews Summarization with Throughput\\n| | | GovReport | MultiNews | | |\\n|---|---|---|---|---|---|\\n| Strategy | Cache Budget | Rouge L | Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |\\n| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |\\n| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |\\n| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |\\n| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |\\n| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |\\n| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |\\n| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |\\n| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |\\n| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |\\n| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |\\n| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |\\n| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |\\n| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |\\n| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |\\n| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |\\n| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |\\n| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |\\n| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |\\n| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |\\n| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |\\n| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |\\n\\n### Weakness 2\\n\\n>[...] the improvement over the baseline is not very significant in Needle-in-a-Haystack, Common Words, and MedQA Multiple Choice\\n\\nWe respectfully disagree that our improvement is not significant. 
We strongly believe our approach overall is a useful addition to the toolkit of compression strategies. Per the throughput table below, we are 1.5-2x faster than H$_2$O, Scissorhands, and FastGen on pre-fill processing (resulting in thousands more tokens per second) while comparable in quality metrics. We are better than all methods on MedQA question-answering and LongBench MultiNews. Compared to L2 [6], our GPT-Judge scores are noticeably higher on MedQA and GSM-8K question-answering from 0.3 - 0.9 compression in all categories (by >1 point in several cases), indicating richer responses than L2 for generative language tasks.\\n\\nWe also remind the reviewer that the L2 strategy was originally designed for long-context retrieval tasks, and we are competitive against it down to 0.3 compression (at which point both methods significantly degrade). Our method defeats it at all compression rates on the LongBench MultiNews task as well. In summary, both of these zero-attention strategies, given their speed, are valuable strategies, with LSH-E preferable for text-generation tasks. \\n\\n### Weakness 4 & Question 4\\n\\n>I could not find provable theoretical guarantees about the quality of the KV cache under this greedy eviction strategy or empirical observations [...]\\n\\nWe measured and report the average attention loss of the attention heads for LSH-E, L2, and Scissorhands as an empirical observation. Attention loss is defined as the sum of the attention probabilities for evicted tokens. Or equivalently, 1 - the sum of the attention probabilities for the tokens in the compressed cache.\\n \\nThe attention loss was measured at 50% cache budget using prompts from the GSM8K question answering dataset. As per the table below, all three methods have low attention loss at 50% cache budget, and LSH-E has lower attention loss compared to L2 and Scissorhands, proving LSH-E's ability to keep high attention tokens in the KV cache. 
\\n\\n### **Table: Attention Loss**\\n| Strategy| Attention Loss |\\n|-----|------|\\n| LSH-E | 0.03357896805 |\\n| L2 | 0.03403072357 |\\n| Scissorhands | 0.04483547211 |\\n\\n----\\nThank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. If any other questions or concerns remain, please let us know.\\n\\n### References\\n\\n[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference [...].\\n\\n[2] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence [...]\\n\\n[3] Xiao, G., Tian, Y., Chen, B., Han, S., & Lewis, M. (2023). Efficient streaming language models with attention sinks.\\n\\n[4] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.\\n\\n[5] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.\\n\\n[6] Devoto, A., Zhao, Y., Scardapane, S., & Minervini, P. (2024). A Simple and Effective $ L_2 $ Norm-Based Strategy for KV Cache Compression. arXiv preprint arXiv:2406.11430.\"}", "{\"summary\": \"This paper proposes a method that uses LSH to perform kv cache eviction. The provided experiments show that the proposed method outperforms the baseline.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Strong Points\\n----\\nS1. The problem of the paper is well-motivated. \\n\\nS2. The proposed algorithm is simple and clear with illustrative example.\\n\\nS3. The proposed method outperforms the baseline L2.\", \"weaknesses\": \"Weak Points\\n----\\nW1. Important related studies and baselines are missing:\\nSinghania, P., Singh, S., He, S., Feizi, S., & Bhatele, A. (2024). 
Loki: Low-Rank Keys for Efficient Sparse Attention. arXiv preprint arXiv:2406.02542.\\nTang, J., Zhao, Y., Zhu, K., Xiao, G., Kasikci, B., & Han, S. (2024). Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. arXiv preprint arXiv:2406.10774.\\n\\nW2. The key measures of the targeted task should be to have more accurate inference with a lower memory footprint and latency. I do not agree with the methodology of not comparing with other \\\"non attention-free\\\" methods.\\n\\nW3. The presentation of experiments needs to be improved: Lack of discussions and intuitions in the experiment analysis. For example, why does LSH-E outperform Full in Figure 4a; why does LSH-E become worse than L2 after 50% cache budget in Figure 4b? We have many subsubsections in the experiments, but most contents in those are barely text illustration of the figure and result with no discussion of why we would have those results.\\n\\nW4. The execution time of the proposed system is missing.\\n\\nW5. The discussion of the error introduced by the LSH is not included. I wonder: if we use cosine similarity to evict the cache instead of LSH, what will the accuracy, latency, and memory usage be?\\n\\nW6. In the supplementary materials, we see more experiments with more baselines that are better than L2. I wonder why the authors do not include them.\\n\\n\\nPresentation\\n----\\nP1. Line 180 \\\"heavy hitters' -> ``heavy hitters''\\nP2. 
The axis captions of the figures are too small to be seen.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors Pt 2\", \"comment\": \"### Table 1: Results of LongBench GovReport and MultiNews Summarization with Throughput\\n| | | GovReport | MultiNews | | |\\n|---|---|---|---|---|---|\\n| Strategy | Cache Budget | Rouge L | Rouge L | Decode Toks Per Sec | Prefill Toks Per Sec |\\n| Full | 100% | 0.230 | 0.192 | 16.071 | 16573.492 |\\n| LSH-E | 30% | 0.202 | 0.180 | 22.880 | 20293.524 |\\n| L2 | 30% | 0.201 | 0.165 | 23.981 | 20628.160 |\\n| H2O | 30% | 0.219 | 0.175 | 21.555 | 13025.776 |\\n| Scissorhands | 30% | 0.214 | 0.175 | 21.448 | 13004.254 |\\n| LSH-E | 50% | 0.217 | 0.186 | 22.846 | 20459.961 |\\n| L2 | 50% | 0.214 | 0.174 | 16.013 | 15851.952 |\\n| H2O | 50% | 0.225 | 0.181 | 21.973 | 13969.985 |\\n| Scissorhands | 50% | 0.219 | 0.182 | 20.978 | 13549.967 |\\n| LSH-E | 70% | 0.223 | 0.187 | 22.914 | 21002.334 |\\n| L2 | 70% | 0.223 | 0.187 | 24.305 | 21303.763 |\\n| H2O | 70% | 0.229 | 0.184 | 21.793 | 14050.521 |\\n| Scissorhands | 70% | 0.226 | 0.183 | 21.705 | 13954.693 |\\n| LSH-E | 90% | 0.228 | 0.185 | 22.873 | 21229.230 |\\n| L2 | 90% | 0.230 | 0.186 | 24.010 | 21305.693 |\\n| H2O | 90% | 0.227 | 0.181 | 21.665 | 14007.697 |\\n| Scissorhands | 90% | 0.230 | 0.182 | 21.411 | 14025.440 |\\n| Fastgen | Attention recovery frac 70% | 0.192 | 0.129 | 12.752 | 1171.069 |\\n| Fastgen | Attention recovery frac 75% | 0.231 | 0.174 | 12.291 | 1157.987 |\\n| Fastgen | Attention recovery frac 80% | 0.232 | 0.184 | 11.850 | 1142.679 |\\n| Fastgen | Attention recovery frac 85% | 0.236 | 0.183 | 11.658 | 1164.689 |\\n\\n\\n--- \\nThank you for your review. If we have addressed your questions, we would appreciate it if you would consider updating your score. 
If any other questions or concerns remain, please let us know.\\n\\n\\n[1] Kitaev, N., Kaiser, \\u0141., & Levskaya, A. (2020). Reformer: The efficient transformer.\\n\\n[2] Han, I., Jayaram, R., Karbasi, A., Mirrokni, V., Woodruff, D. P., & Zandieh, A. (2023). Hyperattention: Long-context attention in near-linear time.\\n\\n[3] Zandieh, A., Daliri, M., & Han, I. (2024). QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead.\\n\\n[4] Zandieh, A., Han, I., Daliri, M., & Karbasi, A. (2023, July). Kdeformer: Accelerating transformers via kernel density estimation.\\n\\n[5] Zandieh, A., Han, I., Mirrokni, V., & Karbasi, A. (2024). SubGen: Token Generation in Sublinear Time and Memory.\\n \\n[6] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference [...].\\n\\n[7] Liu, Z., Desai, A., Liao, F., Wang, W., Xie, V., Xu, Z., ... & Shrivastava, A. (2024). Scissorhands: Exploiting the persistence [...]\\n\\n[8] Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., ... & Li, J. (2023). Longbench: A bilingual, multitask benchmark for long context understanding.\\n\\n[9] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model tells you what to discard: Adaptive kv cache compression for llms.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your feedback. We have addressed your questions and concerns in our rebuttal. Please let us know if you have any further comments.\\n\\nThank you.\"}" ] }
0YxvqG9SsJ
Offline Model-Based Skill Stitching
[ "Penglin Cai", "Feiyang Xie", "Haoqi Yuan", "Zongqing Lu" ]
We study building agents capable of solving long-horizon tasks using offline model-based reinforcement learning (RL). Existing RL methods effectively learn individual skills. However, seamlessly combining these skills to tackle long-horizon tasks presents a significant challenge, as the termination state of one skill may be unsuitable for initiating the next skill, leading to cumulative distribution shifts. Previous works have studied skill stitching through online RL, which is time-consuming and raises safety concerns when learning in the real world. In this work, we propose a fully offline approach to learn skill stitching. Given that the aggregated datasets from all skills provide diverse and exploratory data, which likely includes the necessary transitions for stitching skills, we train a dynamics model designed to generalize across skills to facilitate this process. Our method employs model predictive control (MPC) to stitch adjacent skills, using an ensemble of offline dynamics models and value functions. To mitigate overestimation issues inherent in models learned offline, we introduce a conservative approach that penalizes the uncertainty in model and value predictions. Our experimental results across various benchmarks validate the effectiveness of our approach in comparison to baseline methods under offline settings.
[ "Skill stitching", "Offline reinforcement learning", "Model-based planning" ]
https://openreview.net/pdf?id=0YxvqG9SsJ
https://openreview.net/forum?id=0YxvqG9SsJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zvgOSSIZre", "o0nHc2zGvj", "iwK91aAHJR", "ZTYlCurVWQ", "Rk2Gf3CdBO", "PnRWxRJgub", "LI1xnNzJEr", "JCfNhugqsf", "6ctDyJh1ZU", "50XXM4n1XI" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1732061846582, 1732525198073, 1733380860912, 1732062785963, 1732062905946, 1732062142853, 1730689953568, 1732792125097, 1730212362242, 1730679092863 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10675/Authors" ], [ "ICLR.cc/2025/Conference/Submission10675/Reviewer_sWqG" ], [ "ICLR.cc/2025/Conference/Submission10675/Authors" ], [ "ICLR.cc/2025/Conference/Submission10675/Authors" ], [ "ICLR.cc/2025/Conference/Submission10675/Authors" ], [ "ICLR.cc/2025/Conference/Submission10675/Authors" ], [ "ICLR.cc/2025/Conference/Submission10675/Reviewer_sWqG" ], [ "ICLR.cc/2025/Conference/Submission10675/Reviewer_jg8h" ], [ "ICLR.cc/2025/Conference/Submission10675/Reviewer_gcaq" ], [ "ICLR.cc/2025/Conference/Submission10675/Reviewer_jg8h" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"## Weaknesses:\\n\\n1. About originality and baselines:\\n\\nPEX [1], OPAL [2] and LPD [3] all tackle challenges in single long-horizon tasks. For example, PEX [1] introduces offline-trained policy $ \\\\pi_\\\\beta $ and online-trained policy $ \\\\pi_\\\\theta $, but these two policies are used as a policy pool to execute within a single task. During the execution phase, an action is produced from the joint policy $\\\\Pi=[\\\\pi_\\\\beta, \\\\pi_\\\\theta]$ for each step. OPAL [2] introduces hierarchical structures to stitch primitive policies. 
\\n\\nDifferent from previous literature, our paper lies in a totally different problem setting, where we regard each task as a skill. In other words, we consider task-level stitching. The largest gap between primitive-level stitching and task-level stitching is that task-level stitching requires inserting an unseen trajectory sequence, which is not needed in primitive-level stitching. Considering an example of task-level stitching: microwave $\\\\to$ kettle, there is a giant gap between the last state of finishing the task \\u201cmicrowave\\u201d and the initial state to accomplish the task \\u201ckettle\\u201d. However, for a primitive-level case such as \\u201ckettle = move + grasp + lift\\u201d, there is no gap when switching from \\u201cgrasp\\u201d to \\u201clift\\u201d. In a word, our method tackles the challenge of skill stitching in the case where there are large gaps between two task-level skills, which is different from previous works [1, 2, 3].\\n\\n[1] Zhang, Haichao, We Xu, and Haonan Yu. \\\"Policy expansion for bridging offline-to-online reinforcement learning.\\\" arXiv preprint arXiv:2302.00935 (2023).\\n\\n[2] Ajay, A., Kumar, A., Agrawal, P., Levine, S., \\\\& Nachum, O. (2020). Opal: Offline primitive discovery for accelerating offline reinforcement learning. arXiv preprint arXiv:2010.13611.\\n\\n[3] Yang, Y., Hu, H., Li, W., Li, S., Yang, J., Zhao, Q., \\\\& Zhang, C. (2023, June). Flow to control: Offline reinforcement learning with lossless primitive discovery. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 9, pp. 10843-10851).\\n\\n## Questions:\\n\\n1.\\tWhat if the model learns inaccurately in a complex environment?\\n\\nThe inaccuracy of value functions and dynamics models will potentially lead the stitching process to a sub-optimal state, thus resulting in a failure during task execution. 
In our work, we adopt model ensemble to reduce the inaccuracy of both value functions and dynamics models.\\n\\n2.\\tCan you use the normalized score for the experimental results?\\n\\nWe use the absolute scores (the number of tasks accomplished) previously since we study the problem of task-level skill stitching. Normalized scores are just $\\\\frac{\\\\text{absolute scores}}{\\\\text{the number of tasks in total}} \\\\times 100\\\\\\\\%$. We have modified all the experimental results into normalized scores, as shown in the tables with green color in the revised paper.\"}", "{\"comment\": \"Thanks for your replay. I still be concerned about the experiments. The gap between primitive-level stitching and task-level stitching may be large. However, it is still valuable to verify the primitive-level stitching methods in these tasks. Therefore, I keep my score.\", \"suggestions\": \"The experimental results in Table 1-4 are hard to follow. You can find ways to make them more understandable.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"## Weaknesses:\\n\\n1.\\tThe assumption of the availability of a dataset for each skill is a strong assumption, is there a way to relax it? For example learning diverse skills from one offline dataset? Is this possible and is there any related work that focus on this problem?\\n\\nWe are not sure what the \\\"one dataset\\\" here exactly refers to.\\n\\n- For one offline dataset with various single skills, we can divide it into parts of each skill. 
\\n- For one offline dataset with only long-horizon tasks containing many skills in order, we can divide it into parts of each skill.\\n- However, for one offline dataset with only one single skill, it might be unsolvable, since we have no access to the trajectories of other skills. The policies, value functions, and dynamics models trained on datasets of only one skill are possibly unable to generalize to out-of-distribution states and transitions.\\n\\n2.\\tTraining each skill separately via offline RL seems expensive and time-consuming.\\n\\nWe train a multi-task policy instead of multiple single-task policies in both Maze and Kitchen, which can be viewed as a goal-conditioned policy. Take Kitchen as an example. Following the tradition of vector-based RL, we use a 7-dimensional one-hot vector (since there are 7 tasks in Kitchen) to denote which skill is being executed, and the one-hot vector is concatenated with the observation vector before being taken by the policy network as input. We have made this clear in the revision (modified texts in blue in Section 4.1.2).\\n\\n3.\\tFor some hyperparameters it is not clear to me they have been chosen, for example the maximum steps of skill execution seems very problem dependent.\\n\\nThe maximum steps are set independently for each environment. To be concrete, in the Maze environment, the maximum steps for a task containing 2 / 3 / 4 skills are all 50, including at most 5 steps for each stitching part between two adjacent skills. In the Kitchen environment, the maximum steps for 2 / 3 / 4 skills are 250 / 360 / 480 respectively, including at most 20 steps for each stitching part between two adjacent skills. In the Robosuite environment, the maximum number of steps for open door $\\\\to$ close door is 50, and the maximum number of steps for can $\\\\to$ can is 100, including at most 20 steps for each stitching part between two adjacent skills. 
Note that we adopt the action space from MAPLE, which greatly reduces the number of steps needed in the Robosuite benchmark compared to previous literature. We have made this clear in the revision (modified texts in blue in Appendix B.3).\\n\\n4.\\tThe method does not seem effective on more complicated tasks (for example in table 2 the method fails in accomplishing more than one skill regardless of the number of skills in the task), but it is still better than the baselines.\\n\\nThis is largely due to the low success rate of the first skill, which is not caused by our stitching method. For example, the success rates of the 7 tasks in Kitchen are listed below: \\n\\n| Task | IQL Success Rate |\\n|:------:|:------------------------:|\\n| bottom burner | 0.96 |\\n| top burner | 0.00 |\\n| light switch | 0.00 |\\n| slide cabinet | 0.00 |\\n| hinge cabinet | 0.00 |\\n| microwave | 0.88 |\\n| kettle | 0.27 |\\n\\nSince our experimental results are based on the average over all possible permutations, the absolute value of scores will be inevitably low. For example, consider the first row of Table 2 in the paper, where we test over $A_7^2=42$ combinations of tasks. Among the final scores of these 42 tasks, at least $4\\\\times 6=24$ of them will be 0, since 4 of the 7 tasks cannot be accomplished at all. Theoretically, the upper bound of the score accomplished by an arbitrary (even expert) algorithm should be $0.96 \\\\times 2 \\\\times \\\\frac{1}{7} + 0.00 \\\\times 2 \\\\times \\\\frac{4}{7} + 0.88 \\\\times 2 \\\\times \\\\frac{1}{7} + 0.27 \\\\times 2 \\\\times \\\\frac{1}{7} = 0.603$ (without normalization, as the revised version has normalized the results following another reviewer's suggestion). 
As shown in Table 2 (before revision), our MB-Stitch has achieved 0.518, which is close to the upper bound, and outperforms the baselines.\"}", "{\"title\": \"Official Comment of Rebuttal (Cont'd)\", \"comment\": \"## Questions:\\n\\n1.\\tFor the maze experiments, can you compare to offline goal conditioned RL for example goal-conditioned IQL?\\n\\nIn implementation, we already use one multi-task policy (i.e., the goal-conditioned policy) instead of four single-task policies to accomplish the four skills, since the goal information is naturally contained within the observations of the Maze environment. We have made this clear in the revision (modified texts in blue in Section 4.1.2).\\n\\n2.\\tFor the MF-stitching baseline, do you train the model-free stitching policy for each two adjacent skills?\\n\\nThe model-free stitching policy in Maze or Kitchen is also a multi-task one, i.e., conditioned on a vector denoting which two adjacent skills are under consideration. The model-free stitching policies in Robosuite are two different ones (one is for door $\\\\to$ close door; another is for can $\\\\to$ can).\\n\\n3.\\tHow does the method perform for each skills permutation? Is it better under some permutations and worse in others?\\n\\nIt is better under some permutations and worse in others. However, this is largely due to the magnitude of the gap during stitching. As shown in the table above (the table in the former rebuttal comment), the RL policies (IQL) are naturally suitable for some of the tasks, while relatively unsuitable for the others. Intuitively, a lower success rate of a particular skill indicates a lower suitability of executing this skill, and could result in a larger gap to stitch from the previous skill to this one. Empirically, a moderate gap can be suitable for our stitching method to handle. 
However, when the gap is too small, i.e., the RL policy itself can handle the next skill quite well without stitching, adding the stitching process will possibly lead to a negative effect, resulting in worse performance.\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"## Weaknesses:\\n\\n1. Lack of novelty:\\n\\n[1] trains a critic $Q(s, a)$ with goal-relabeled short sub-trajectories, then trains a value-conditioned diffusion model to generate trajectories given a goal, and finally executes the actions in the generated trajectories. [2] introduces trajectory stitching for synthesizing new trajectories (as a method of data augmentation), and trains BC policies over augmented datasets. Although [3] uses model-based rollouts (planning) for skill-based task planning, it has less to do with skill stitching.\\n\\nUnlike previous literature [1, 2], our paper lies in a totally different problem setting, where we regard each task as a skill. [1] and [2] both stitch sub-trajectories within a single task, while we consider task-level stitching in our paper. When generating an extra sub-trajectory $\\\\tau_2$ to be stitched after the former sub-trajectory $\\\\tau_1$, there is no gap between $\\\\tau_1$ and $\\\\tau_2$. However, when stitching two task-level skills (e.g., microwave and kettle), the gap between two adjacent tasks is rather large, which is the key difference between the two settings. 
In a word, our method tackles the challenge of skill stitching in the case where there are large gaps between two task-level skills, which is different from previous works [1, 2].\\n\\n[1] Stitching Sub-trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL (AAAI 2024)\\n\\n[2] Model-based Trajectory Stitching for Improved Offline Reinforcement Learning (Offline RL Workshop @ NeurIPS 2022)\\n\\n[3] Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks (IJCAI 2024)\\n\\n## Questions:\\n\\n1.\\tI wonder if the value function properly evaluates states that have not been visited (during stitching). As the value functions for each skill are learned distinctly, how can the value evaluation in the stitched space be accurate and reliable?\\n\\nLearning the value functions distinctly for each skill does not imply inaccuracy. As long as the value function used represents the relative suitability (possibility) of accomplishing the next skill (task), it can be utilized to guide the process of stitching. Besides, we adopt the trick of model ensemble to reduce the inaccuracy of value functions, which can make value estimation more reliable.\\n\\n2.\\tHow might the proposed method be adapted to handle low-coverage offline datasets?\\n\\nIn the phase of learning individual skills, our method faces the same challenges as offline RL methods when data coverage is limited. However, in the skill stitching phase, our approach can mitigate the scarcity of each individual skill dataset by training on the union of all datasets, where the necessary transitions between skills are more likely to be present. 
Additionally, our model-based approach leverages the generalization capabilities of world models to fill in missing transitions, while incorporating a conservative strategy to mitigate distribution shift.\\n\\n\\n3.\\tI wonder if the authors considered any techniques to reduce the computational burden of MPC in continuous or stochastic environment?\\n\\nTo accelerate MPC, techniques such as approximate planning (e.g., TD-MPC [4], which uses learned value functions to bootstrap planning), batch computation, model compression and distillation, and receding horizon control [5] could be applied. While these methods are promising, we have not focused on efficiency in this work, as our primary contribution is improving the stitching performance of offline-learned skills.\\n\\n[4] Hansen et al., \\\"Temporal difference learning for model predictive control.\\\" (2022).\\n\\n[5] Mayne and Michalska. \\\"Receding horizon control of nonlinear systems.\\\" (1988)\\n\\n4.\\tWhat potential strategies could be considered for improving generalization or adaptability to dynamic environments within the constraints of offline learning?\\n\\nData augmentation could be properly considered, utilizing generative models. Through data augmentation, we could get larger quantities of data with a higher coverage of states and transitions.\\n\\n5.\\tMinor typos.\\n\\nThank you for pointing out the typos! We have corrected these in red color in the revision.\"}", "{\"summary\": \"This paper investigates the development of agents capable of addressing long-horizon tasks through offline model-based reinforcement learning (RL). While current RL methods excel at learning individual skills, they struggle with integrating these skills to accomplish extended tasks due to the mismatch between the termination of one skill and the initiation of another, resulting in distribution shifts. 
The authors propose an offline approach to skill stitching, leveraging aggregated datasets from various skills to train a dynamics model that can generalize across different skills. This model, along with an ensemble of offline dynamics models and value functions, is used to stitch adjacent skills through model predictive control (MPC). To address the overestimation issues common in offline model learning, a conservative method is introduced to penalize uncertainty in model and value predictions. The study's experimental results demonstrate the effectiveness of this approach over baseline methods in offline settings across multiple benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is written well. The method is easy to follow.\\n2. This work is evaluated on various domains.\", \"weaknesses\": \"1. The originality of this work is quite limited. The idea of stitching skills based on value functions is not new; many papers have proposed similar approaches. For example, PEX [1].\\n2. A large number of baseline algorithms are missing. For example, OPAL [2] and LPD [3].\\n\\n\\n[1] Zhang, Haichao, Wei Xu, and Haonan Yu. \\\"Policy expansion for bridging offline-to-online reinforcement learning.\\\" arXiv preprint arXiv:2302.00935 (2023).\\n\\n[2] Ajay, A., Kumar, A., Agrawal, P., Levine, S., & Nachum, O. (2020). Opal: Offline primitive discovery for accelerating offline reinforcement learning. arXiv preprint arXiv:2010.13611.\\n\\n[3] Yang, Y., Hu, H., Li, W., Li, S., Yang, J., Zhao, Q., & Zhang, C. (2023, June). Flow to control: Offline reinforcement learning with lossless primitive discovery. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 9, pp. 10843-10851).\", \"questions\": \"1. What if the model learns inaccurately in a complex environment?\\n\\n2. 
Can you use the normalized score for the experimental results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an algorithm for skill stitching from offline data. The algorithm has two phases: an offline training phase, where each skill is extracted from an offline dataset that contains trajectories representing the skill, and a test phase, where the dynamics model is used for MPC-based skill stitching guided by the value function. 
The experiments demonstrate the performance of the method in comparison with some baselines; the ablations show that the quality of the data can have a significant effect on the performance of the skill changing as well as the diversity of transitions in the training distribution.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of the offline skill stitching problem is important for real-world applications\\n\\n2. The idea of using a model and planning to stitch the skills is interesting and seems a good direction for further research.\\n\\n3. The method results on the maze are strong compared to the baselines.\\n\\n4. The results are better than baselines in general.\", \"weaknesses\": \"1. The assumption of the availability of a dataset for each skill is a strong assumption, is there a way to relax it? For example learning diverse skills from one offline dataset? Is this possible and is there any related work that focus on this problem?\\n\\n2. Training each skill separately via offline RL seems expensive and time-consuming.\\n\\n3. For some hyperparameters it is not clear to me they have been chosen, for example the maximum steps of skill execution seems very problem dependent.\\n\\n4. The method does not seem effective on more complicated tasks (for example in table 2 the method fails in accomplishing more than one skill regardless of the number of skills in the task), but it is still better than the baselines.\", \"questions\": \"1. For the maze experiments, can you compare to offline goal conditioned RL for example goal-conditioned IQL?\\n\\n2. For the MF-stitching baseline, do you train the model-free stitching policy for each two adjacent skills?\\n\\n3. How does the method perform for each skills permutation? 
Is it better under some permutations and worse in others?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work explores a model-based approach for offline learning of skills and their sequential stitching using only individual skill datasets, without relying on online interactions with the environment. Unlike existing skill stitching techniques based on online reinforcement learning, this approach utilizes offline data to decompose long-horizon tasks into manageable skills that can be executed sequentially. The focus is on training a dynamics model with aggregated skill datasets, enabling effective model-based planning and incorporating conservative optimization objectives to ensure robust transitions between skills during planning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed offline skill stitching method is straightforward yet effective in certain environments with long-horizon tasks, enabling task completion by sequencing learned skills from offline datasets.\", \"Skill stitching offers a practical approach in hierarchical reinforcement learning, addressing challenges in learning tasks composed of multiple sub-tasks.\"], \"weaknesses\": \"- Lack of novelty: The proposed skill stitching method of evaluating states for stitching using the value function is not novel; it is a fundamental approach used in existing offline RL for trajectory stitching [1, 2]. 
A comparison with these existing offline trajectory stitching methods is required.\\n\\n[1] Stitching Sub-trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL (AAAI 2024)\\n\\n[2] Model-based Trajectory Stitching for Improved Offline Reinforcement Learning (NeurIPS 2023)\\n\\nThe work below also uses model-based rollouts (planning) for skill-based task planning in offline settings, similar to the proposed method.\\n\\n[3] Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks (IJCAI 2024)\\n\\n-\\tThe proposed method using MPC operates by sampling possible actions and evaluating the value of the resulting states. For continuous action spaces, it requires extensive sampling and evaluation to determine the best outcome. Furthermore, in environments with stochasticity, the MPC optimization can be required at each attempt, leading to significant inefficiencies in time complexity.\\n\\n- The performance gain in the Kitchen appears minimal, raising questions about whether the proposed method is effective in continuous action space settings. In the Maze Runner, the discrete action space makes the MPC method feasible. However, in complex continuous tasks like the Kitchen task, the value function evaluation may be unreliable, requiring MPC to extensively search the possible action space, which may explain the minimal performance gain observed.\\n\\n- The method may not generalize well across diverse environments, especially those with dynamic or unpredictable conditions, as it relies solely on offline data without any consideration of real-time adaptability.\\n\\n- The approach's effectiveness is highly dependent on the diversity of the offline datasets, as the method relies on the dynamics model learned on the aggregated offline datasets.\", \"questions\": [\"I wonder if the value function properly evaluates states that have not been visited (during stitching). 
As the value functions for each skill are learned distinctly, how can the value evaluation in the stitched space be accurate and reliable?\", \"How might the proposed method be adapted to handle low-coverage offline datasets?\", \"I wonder if the authors considered any techniques to reduce the computational burden of MPC in continuous or stochastic environment?\", \"What potential strategies could be considered for improving generalization or adaptability to dynamic environments within the constraints of offline learning?\", \"Minor Typos:\"], \"line_97\": \"over-estimate \\u2192 overestimate, to match the usage elsewhere in the paper.\", \"line_215\": \"continous actions space \\u2192 continuous action space\", \"line_257\": \"T(\\\\cdot|s_t,a_t) \\u2192 T_{\\\\phi}(\\\\cdot|s_t,a_t)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0YkZe9nwiC
Self-Informed Generative Active Learning
[ "Zixi Huang", "Shiwei Tong", "Fei Wang", "Zhenya Huang", "Zhaofeng Liu", "Hao Yu" ]
Active learning has been a cost-efficient approach to obtaining high-performance AI models with fewer selective annotations. In scenarios where the acquisition of original unlabeled data poses significant challenges, active learning harnessing synthesized data instances is more promising than traditional pool-based methods. In this paper, we propose the Self-Informed Generative Active Learning (SIGnAL) framework as an effective solution to actively generate and select data instances for annotation and downstream model training. In SIGnAL, we propose to guide the data generation based on a reinforcement learning policy, where the generator is self-informed by the reward to generate more informative instances. In addition, we introduce an acquisition function that measures both the informativeness and relevance of instances. Such acquisition function can be transformed to the reward seamlessly for generator optimization. Our experiments on the text classification task validate the effectiveness of our framework, especially when the original data scale is limited.
[ "Active Learning", "Large Language Model", "Synthetic Data", "Reinforcement Learning" ]
Reject
https://openreview.net/pdf?id=0YkZe9nwiC
https://openreview.net/forum?id=0YkZe9nwiC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rXOLjd8pN0", "nPBVbG2geM", "jrywTQToCM", "jprjXFKEt8", "iVjpfDqXxO", "dnNM94kpXp", "O8iKJtg3QB", "K5N8WYNreA", "1IlFzGKfw3" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730483121097, 1730565302893, 1730361510104, 1730921186576, 1737524211032, 1733948347432, 1732627742314, 1732637525485, 1730191302203 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_NgKa" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_aZDs" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_VPBQ" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_eSUZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12726/Area_Chair_CwNT" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_VPBQ" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_aZDs" ], [ "ICLR.cc/2025/Conference/Submission12726/Reviewer_JtSL" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes the Self-Informed Generative Active Learning (SIGnAL) framework, which generates synthetic data to improve active learning when real data is scarce. Using reinforcement learning, SIGnAL\\u2019s generator produces informative data guided by a reward system, ensuring relevance and usefulness for model training. An acquisition function assesses this data\\u2019s informativeness and relevance, optimizing the generator\\u2019s outputs. Experiments on text classification validate SIGnAL\\u2019s effectiveness, particularly in data-limited scenarios, offering a cost-efficient solution.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea is interesting and novel, the paper is easy to follow.\", \"weaknesses\": \"1. 
The experiments are far from sufficient for a top-tier conference; currently there are only overall performance results, with no ablation studies or further analysis.\\n2. As a method that combines active learning and synthetic data generation from LLMs, the authors only compare it with active learning approaches; I think they should also compare the proposed method with synthetic data generation without active learning\", \"missing_related_work\": \"[1] Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias\\n[2] ZeroGen: Efficient Zero-shot Learning via Dataset Generation\\n[3] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding\", \"questions\": \"In the equation of line 186, the distribution shouldn't be p_z because there is synthetic data while p_z is defined as real data, right?\", \"line_284\": \"missing space between the and generate\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Self-Informed Generative Active Learning (SIGnAL), an RL-based approach for query-synthesizing active learning. SIGnAL generates synthetic data instances to enrich the data pool, especially when access to diverse, unlabeled real data is limited. Experimental results show SIGnAL\\u2019s performance advantage over traditional pool-based methods in text classification tasks, particularly when the data pool is very small.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method addresses the limitations of traditional pool-based methods by generating informative synthetic data instances; this could be beneficial when even unlabeled data is scarce.\\n2. The paper is mostly well-organized.\\n3. The acquisition function that combines both informativeness and relevance makes sense.\", \"weaknesses\": \"1. 
The proposed SIGnAL does not generate the most informative/beneficial data point for labeling; instead, it still requires a traditional acquisition function to make the selection. I think this is a critical weakness of this paper. From my understanding, generative AL should not only generate data samples, but more importantly generate the most informative samples.\\n2. The setting of this paper is kind of niche; most areas that benefit from AL have an abundant amount of unlabeled data, and if SIGnAL simply generates more unlabeled data, I don't see it being very useful in practice.\\n3. The acquisition (relevance and informativeness) is quite simple: relevance is simply the distance, with informativeness directly taken from CAL.\\n4. The experiments are very limited. The only results are in Figure 3, with limited datasets and baselines, and the improvements are hardly distinguishable in my opinion. \\n\\nIn general I think this paper presents an interesting direction, but the details need a bit more refinement.\", \"questions\": \"As discussed in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper leverages an RL policy to guide data generation, allowing the generator to receive rewards that encourage the creation of more informative instances. Additionally, it introduces an acquisition function that evaluates both informativeness and relevance, seamlessly transforming this evaluation into rewards for optimizing the generator.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"By utilizing reinforcement learning, the approach effectively addresses the challenges posed by the dynamic and delayed nature of informativeness, treating instance informativeness as the reward signal to optimize the generative model. 
The method incorporates an acquisition function that evaluates both traditional informativeness and the relevance of data instances, transforming these evaluations into rewards during training.\", \"weaknesses\": \"The paper provides a detailed analysis of the challenges faced by pool-based active learning methods; however, it lacks an introduction to existing query-synthesizing methods and a distinction between the proposed method and existing synthesizing-based methods, such as \\u201cLADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning\\u201d and \\u201cWhen Active Learning Meets Implicit Semantic Data Augmentation\\u201d. Notably, synthesizing-based methods are one of the primary categories in active learning scenarios.\\n\\nThe PPO reinforcement learning method is utilized in this paper to optimize active learning strategies for larger rewards. Could you provide a detailed explanation of the state and action settings in this reinforcement learning scenario? Additionally, is it worth considering adopting the classifier's accuracy as an additional reward after generating samples?\", \"regarding_the_experiments\": \"The baseline methods adopted in the paper are all pool-based active learning methods. To further validate the effectiveness of your method, it is suggested to compare with synthesizing-based methods as well. Moreover, according to the experimental setup, synthesizing-based methods annotated twice as much data, which could account for their superior performance. It is recommended to include ablation studies to provide additional explanations.\", \"questions\": \"The PPO reinforcement learning method is utilized in this paper to optimize AL strategies for larger rewards. 
Could you provide a detailed explanation of the state and action settings in this RL scenario?\\n\\nAdditionally, when considering other RL-based active learning strategies, is it worth considering adopting the classifier's accuracy as an additional reward after generating samples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses active learning by leveraging a generative model to produce unlabeled examples, which are then labeled by an oracle and added to a classification model's training set. Unlike traditional methods that rely on a fixed pool of unlabeled data, this approach actively generates new, potentially more informative examples. The model prioritizes examples based on their distance from nearest neighbors and the discrepancy in predictions between the generated sample and its neighbors. To guide the generative model in producing high-quality samples, it is trained via a Reinforcement Learning algorithm (PPO), optimizing it to generate samples that best serve the classification task. The method is tested on text classification problems, showing mixed results compared to current state-of-the-art techniques.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The approach is promising; using a generator to produce new samples is a valuable innovation for improving active learning systems. This strategy assumes a pre-trained generative model, which is reasonable for text but may not be universal across domains. The selection criterion is sensible, and directly training the generator to maximize it through RL is more robust than simple thresholding.\", \"weaknesses\": \"However, the experimental section lacks detail to fully evaluate the approach. 
Key hyperparameters\\u2014such as the number of samples generated per iteration and PPO settings\\u2014are not systematically analyzed, and no ablation study is provided. It would also be valuable to see a comparison of results with and without the RL approach. The current experimental section leaves significant space unexplored, making it hard to discern the model\\u2019s strengths and weaknesses.\\n\\nIt\\u2019s also unclear how the RL component is applied: Is the policy trained concurrently with sample generation, or is it established before the active learning phase? If the reward function evolves as new samples are generated, this could introduce non-stationarity, which would impact performance. Further clarification on this point is essential.\\n\\nRegarding performance, the results do not clearly outperform existing methods. Notably, the learning curves for the proposed method (Signal) appear to extend longer than others. This might be because other methods are restricted to samples in the original dataset, while Signal can generate an infinite number of examples. However, this is not entirely clear, as baseline methods don\\u2019t achieve fully supervised performance, which raises questions about their comparison criteria.\", \"questions\": \"In conclusion, the paper presents an interesting idea, but the experimental section needs significant refinement. Adding more comprehensive experiments and ablation studies would strengthen the conclusions and clarify the potential of this approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper proposes a new method for active learning called Self-Informed Generative Active Learning (SIGNAL). The method uses a generative model to produce new examples, which are then labeled and added to the training set. 
This allows the model to actively generate new and potentially more informative examples. The model prioritizes examples based on their distance from nearest neighbors and the discrepancy in predictions between the generated sample and its neighbors. A reinforcement learning algorithm is used to train the generative model to produce high-quality samples. The method is tested on text classification problems.\\n\\n(b) Strengths of the paper\\nThe approach is promising.\\n\\nUsing a generative model to produce new samples is a valuable innovation for improving active learning systems.\\n\\nThe selection criterion is sensible.\\n\\nDirectly training the generator through reinforcement learning is more robust than simple thresholding.\\n\\nThe paper is well-organized.\\n\\nThe acquisition function that combines both informativeness and relevance makes sense.\\n\\n(c) Weaknesses of the paper\\nThe experimental section lacks detail and does not fully evaluate the approach.\\n\\nKey hyperparameters are not systematically analyzed and no ablation study is provided.\\n\\nA comparison of results with and without the reinforcement learning approach would be valuable.\\n\\nIt is unclear how the reinforcement learning component is applied.\\n\\nThe results do not clearly outperform existing methods.\\n\\nThe learning curves for the proposed method appear to extend longer than others.\\n\\nThe proposed SIGNAL does not generate the most informative data point for labeling and still requires a traditional acquisition function to make the selection.\\n\\nThe settings of the paper are a niche.\\n\\nMost areas that benefit from active learning have an abundant amount of unlabeled data.\\n\\nThe acquisition is quite simple.\\n\\nRelevance is simply the distance, with informativeness directly taken from CAL.\\n\\nThe experiments are very limited.\\n\\nThe only results are in Figure 3, with limited datasets and baselines, and the improvements are hardly distinguishable.\\n\\nThe 
experiments are far from sufficient for a top-tier conference.\\n\\nThere is only overall performance but a lack of ablation study and analysis.\\n\\nAs a method that combines active learning and synthetic data generation from LLMs, the authors only compare it with active learning approaches.\\n\\nThey should also compare the proposed method with synthetic data generation without active learning.\\n\\n(d) Reasons for rejecting the paper\\nThe paper presents an interesting idea, but the experimental section needs significant refinement. Adding more comprehensive experiments and ablation studies would strengthen the conclusions and clarify the potential of this approach. The proposed SIGNAL does not generate the most informative data point for labeling; instead, it still requires a traditional acquisition function to make the selection. The settings of the paper are niche, and most areas that benefit from active learning have an abundant amount of unlabeled data. The experiments are far from sufficient for a top-tier conference; there is only overall performance but a lack of ablation study and analysis. As a method that combines active learning and synthetic data generation from LLMs, the authors only compare it with active learning approaches, and they should also compare the proposed method with synthetic data generation without active learning.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer aZDs raised concerns about the proposed method not generating the most informative data points and the niche settings of the paper. The reviewer also pointed out that the experiments were limited and the improvements were hardly distinguishable. Reviewer VPBQ asked for a detailed explanation of the state and action settings in the reinforcement learning scenario. The reviewer also suggested comparing the proposed method with synthesizing-based methods. 
Reviewer JtSL asked about the ratio of generated samples to actual unlabeled data queried at each iteration in the active learning process. The reviewer also asked how the generated samples were human-labeled.\\n\\nThe authors responded to Reviewer aZDs by stating that they would add more experiments and ablation studies. They also stated that they would clarify the settings of the paper. The authors responded to Reviewer VPBQ by providing a detailed explanation of the state and action settings in the reinforcement learning scenario. They also stated that they would compare the proposed method with synthesizing-based methods. The authors responded to Reviewer JtSL by stating that they would add a definition of the text encoder model and an ethics statement. They also stated that they would disclose the hyperparameter settings and add information about the human labeling process. \\u00a0 \\n\\nIn my final decision, I weighed the points raised by the reviewers as follows:\\n\\nI agreed with Reviewer aZDs that the proposed method does not generate the most informative data points and that the settings of the paper are a niche. I also agreed that the experiments were limited and the improvements were hardly distinguishable. \\u00a0 \\n\\nI agreed with Reviewer VPBQ that a detailed explanation of the state and action settings in the reinforcement learning scenario was needed. I also agreed that the proposed method should be compared with synthesizing-based methods. \\u00a0 \\n\\nI agreed with Reviewer JtSL that the ratio of generated samples to actual unlabeled data queried at each iteration in the active learning process was an important consideration. 
I also agreed that the human labeling process needed to be described.\"}", "{\"comment\": \"As the authors do not provide any feedback, I would like to decrease my rating.\"}", "{\"comment\": \"Reviewer VPBQ, I don't think this (you decreasing the original score simply because the authors did not provide a rebuttal) is fair, especially when the rebuttal period is not done yet. It is doing nothing good.\"}", "{\"summary\": \"This paper introduces an active learning approach for NLP tasks utilizing a generative model. It incorporates KL divergence, as proposed in CAL, to retrieve informative samples and uses inter-sample distance to avoid querying unrelated samples. The method outperforms comparable approaches and can continue the active learning process even without access to further unlabeled data by leveraging generated samples. However, the limited datasets and reliance on LLMs raise questions about the necessity of the approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed approach outperforms other techniques, such as CAL, BERTKIM, and BADGE.\\n2. It allows performance gains by querying generated data after exhausting unlabeled data.\", \"weaknesses\": \"1. The utility of this approach is ambiguous. Active learning aims to efficiently query valuable samples in low-data regimes, particularly in areas with difficult labeling requirements, such as medical or legal fields:\\n- 1.1. The paper only presents general datasets (SST-2, AGNEWS, QNLI) focused on tasks like sentiment analysis and topic classification, where active learning might be unnecessary. Given that the LLM itself can achieve higher performance on such tasks, training an additional classifier via active learning seems contradictory. For this approach to be useful, the active learning-trained model should outperform the LLM.\\n- 1.2. In this regard, domain-specific datasets, such as PubMed or legal datasets, should be added. 
However, studies suggest that even specific tasks can achieve performance gains without active learning (or human labeling) through LLMs [1], raising questions about this method's utility compared to such approaches.\\n2. The number of datasets and class diversity are limited, with only three datasets and two or four classes per dataset. Include datasets with more classes, like DBPEDIA with 14 classes, to address whether the proposed method benefits persist as class counts increase.\\n3. The main paper lacks a definition of $\\\\Phi$, which can only be inferred as a text encoder model.\\n4. No ethics statement is provided. An ethics statement and societal impact are mandatory for ICLR.\\n5. Hyperparameters are not disclosed. Without code submission, at least hyperparameter settings or a code statement should be included.\\n6. Time consumption details are missing. Given the method's reliance on LLMs and RL and the continuous dataset expansion, it likely requires considerably more time than alternative methods. Please add this information.\\n\\n[1] Kim et al., \\\"SELF-EXPERTISE: Knowledge-based Instruction Dataset Augmentation for a Legal Expert Language Model\\\"\", \"questions\": \"1. What is the ratio of generated samples to actual unlabeled data queried at each iteration in the active learning process? If generated samples are only queried after unlabeled data, their value seems minimal.\\n2. How were the generated samples human-labeled? It\\u2019s likely some generated samples are incoherent, making labeling challenging. The paper includes no information about the human labeling process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0Yfjerm9Zp
Enhancing LLM Faithfulness in Rationale Generation via Dual-Reward Probabilistic Inference
[ "Hanqi Yan", "Jiazheng Li", "Yulan He" ]
As large language models (LLMs) are increasingly applied to complex reasoning tasks, achieving both accurate task performance and faithful explanations becomes crucial. However, LLMs often generate unfaithful explanations, partly because they do not consistently adhere closely to the provided context. Existing approaches to this problem either rely on superficial calibration, such as decomposed Chain-of-Thought prompting, or require costly retraining to improve model faithfulness. In this work, we propose a probabilistic inference paradigm that provides fine-grained and lookahead rewards to ensure that LLM-generated rationales are logically coherent and comprehensive. These rewards are derived from a domain-specific proposal distribution, allowing for optimised sequential Monte Carlo approximations. Our evaluations across three different reasoning tasks show that this method, which allows for controllable generation during inference, improves both accuracy and faithfulness of LLMs while keeping computational costs similar to those of existing decoding techniques. This method offers a promising path towards making LLMs more reliable for reasoning tasks without sacrificing performance or efficiency.
[ "interpretability", "faithfulness", "Large language model", "constrained generation" ]
https://openreview.net/pdf?id=0Yfjerm9Zp
https://openreview.net/forum?id=0Yfjerm9Zp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z3rgzACO5F", "ysg3cMTWAb", "xNbTLmFCIt", "v4onCxvtl8", "uiSyymDeGN", "u7UDK9yCgf", "r32Bh8ThM0", "pvpcU4Immp", "lXrBIgPwse", "l3GvfCazs9", "jslAwHzm1j", "hZ3MsolwUZ", "fWP5NDOBEJ", "dIXgq67hZC", "cR5rACzJXg", "bxdLz6ZkTs", "aDaYXW9b0U", "YuTx8Doxga", "VfvGzynrbS", "V8hDUHRQbM", "SjvvFznqyA", "S2f58emKOc", "MNZuJYobWD", "LQ0d3fO2DR", "Ko2VODSa3K", "FLWtKXSuDc", "C7KMpz9Diy", "BXoLOxaaJJ", "9h5OA36Hb1", "9W3lzy175s", "7Ds3nVw1gU", "21pBL1e9hZ" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732766747159, 1730759571303, 1732672605448, 1732605395342, 1732239982606, 1732617355740, 1733245463891, 1732567837007, 1732244457198, 1732241156077, 1733172448444, 1730705308942, 1732247395285, 1730441273467, 1733154510835, 1732770452619, 1732567438238, 1732234649167, 1732539055272, 1732604943397, 1732247105266, 1732567631174, 1733171911574, 1732771728258, 1732604886381, 1732237388573, 1732264494854, 1732567304000, 1734306901614, 1732263449835, 1733154748538, 1729499305436 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_TvLG" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_MSQh" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_1TJ5" ], [ 
"ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_MSQh" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_msmv" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_MSQh" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_TvLG" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_MSQh" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_MSQh" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Authors" ], [ "ICLR.cc/2025/Conference/Submission14116/Reviewer_1TJ5" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer MSQh\", \"comment\": \"We sincerely appreciate the reviewers' valuable time and thoughtful feedback, which have provided us with an opportunity to further clarify our contributions and strengthen our work.\\n\\n> Inconsistant notation, $f_\\\\theta$, $\\\\pi_t$ \\n\\nWe totally agree that consistency and clarity are very important, so we have updated the PDF to keep the complete form. 
Actually, we use $f_\\theta$ to refer to a parameterised function, explicitly highlighting that $\\theta$ are the parameters. In Algorithm 1, the parameters are not the focus, so we use $f$ throughout the algorithm. For $\\pi_t$ and $\\pi$ in the localmask function, we omit the $t$ as we have the input within the function $\\pi$, such as $\\pi(x_t=v\\u2019|x_{t-1})$. \\n\\n> How Algorithm connected to Feynman-Kac and search algorithm \\n\\n**Q1: No Adjustment to $\\\\pi$?**\\n\\n**R1** The Feynman-Kac framework is a classical framework that serves as a general foundation for probabilistic inference. Its key idea is to introduce the potential function $G_t(s_{t-1}, s_t, f_\\\\theta)$, which acts as a reward function to score the current state. Some methods modify the policy $\\\\pi_t$ to derive a new policy $\\\\pi\\u2019_t$. An example is our localmask method. In contrast, our global reward mechanism reweights the output probabilities using an expert model, allowing tokens with relatively low probabilities to be sampled while eliminating high-probability tokens. In that way, we could change the selected top-k tokens according to the score from the expert model. **Based on the changes in the output sequence at step $t$, the backbone model will generate differently at timestep $t+1$, as the conditional generation depends on the updated conditioned input.** It\\u2019s worth noting that there are numerous practical approaches to modifying output distributions. The core idea of the Feynman-Kac framework is to employ a function with lookahead characteristics, which helps avoid local optima.\\n\\n**Q2: Why search algorithm?**\\n\\n**R2**: The existing Monte Carlo Tree Search (MCTS) algorithm utilizes a policy model to generate multiple candidate nodes. These nodes are then evaluated by a reward model, and the state (node) with the highest reward is selected for tree expansion. 
Similarly, in our Algorithm 1, at each timestep, the backbone model generates K tokens (where K is the beam search size). These tokens are then evaluated by the expert model. Among the $K \\\\times N$ generated trajectories, only the trajectory with the highest reward is selected. While our search process does not exactly replicate the step-by-step node selection and expansion in MCTS, it generates multiple trajectories in parallel to reduce computational cost and it should still be categorised as an MCTS algorithm, as in [1][2].\\n\\n> I still don't understand why Localmasking would benefit faithfulness. To me it look like a way to improve the accuracy. \\n\\nYes, your understanding is totally correct. Local masking (CLS) benefits the accuracy, while global reward (expert) benefits the faithfulness.\\n\\n> So seems like the whole algorithm is to use domain-expert to predict the correct label, then use the large backbone model to generate some candidate explanations, and use another faithful expert to score them. \\n\\nIn general, yes, we use a domain-expert model, a classifier (CLS), to ensure prediction accuracy, and a generative expert model to enhance faithfulness. However, we would like to clarify two key points:\\n\\n- The predicted label from the CLS is not directly used as the final label.\\n- The expert model is applied step-wise rather than after the sequence generation is complete.\\nThese differences can lead to distinct outcomes. For example, if we generate 3 tokens at timestep $t-1$, the step-wise approach ensures that the top-K tokens are selected based on the expert model\\u2019s score, denoted as $w_{t-1}^{e}$ (e.g., *apple*, *orange*, *peach*). This selection may differ from the top-K tokens determined solely by the backbone model\\u2019s output distribution, $w_{t-1}^{b}$ (e.g., *this*, *the*, *that*). 
Each token in $w^{e}_{t-1}$ is then incorporated into the conditional context to guide the backbone model\\u2019s generation at $t$. For instance, the generation at $t$ would be conditioned on *apple*, i.e., $\\pi_t(x_t|\\text{apple})$, rather than $\\pi_t(x_t|\\text{this})$. Thus, the expert model provides process-level, fine-grained controllability over the generation, rather than a post-hoc adjustment.\\n\\n> how the class label words $\\mathcal{C}$ are chosen still remains.\\n\\nOur label words $\\mathcal{C}$ are selected based on the datasets\\u2019 label space; see the details in Lines 238-239. Please let us know if you have any further concrete questions. \\n\\n> CLS The definition of the term \\\"CLS\\\" is removed from the revised paper, where it used to be the first sentence of section 5.1.\\n\\nThank you for noticing the change! We apologise for this mistake and have corrected it accordingly in the newly updated revised version.\"}", "{\"summary\": \"The paper proposes an approach to do faithful rationale generation in LLMs. It uses a steering-based approach to make the outputs more faithful to the reasoning of the LLM in classification. The idea is to weight token logits using 2 kinds of reward models: A \\\"local\\\" one that tries to match tokens to those suggested by a domain-specific expert model and a \\\"lookahead\\\" one that does an MCTS-type search and re-weights logits based on rewards from unrolled sequences.\\n\\nExperiments are performed on a couple of QA type datasets, demonstrating that each method makes improvements in classification accuracy and faithfulness of rationales. Some qualitative analyses are also presented.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. MCTS-type inference is a hot topic right now, and it is indeed an important frontier for LLMs to improve on.\\n2. 
At a surface level, experimental results seem to show large gains.\", \"weaknesses\": \"Section 3 is pretty badly written, it is pretty hard to get the details of the approach. Instead of invoking irrelevant sophisticated-sounding terminology like \\\"Feynman-Kac\\\" formulas it would be better to describe the method in more detail. The math especially is confusing, see below.\\n\\nThe paper seems to show some positive experimental results, but I am concerned about whether we are looking at a meaningful comparison. The proposed methods rely on domain experts. Looking at table 8 in the appendix these are generally models that have been fine-tuned for the task in some way (and not just on the validation sets as the main section claims, some have access to external datasets). So it shouldn't be that surprising that a method that is given access to an expert which has more signal will do better than the backbone pre-trained model. A fair comparison would have to be with an approach that does vanilla fine-tuning of the LLaMA or Mixtral model.\", \"in_terms_of_novelty\": \"The authors have not really cited relevant work in the controlled decoding space:\", \"https\": \"//sea-snell.github.io/ILQL_site/\\n\\nThese works already do something more sophisticated than just token reweighting by a reward score. So what is the novel contribution here? 2 possibilities:\\n1. Focusing on the faithfulness problem.\\n2. The \\\"lookahead\\\" idea of the reward model. I don't recall having seen this before, but it feels like a simplification of a full-blown MCTS. I would also call this a poor man's version of ILQL.\\n\\nSo we are just left with #1 then, unless I missed something. And this is something I consider of limited novelty (more like an application for a particular problem, though one with interesting implications from the steering perspective).\", \"questions\": \"1. what is \\\"t \\\\wedge T\\\" ?\\n 2. sec 3.3, what is P(s_t) a posterior over?\\n 3. 
In what sense is \\pi_t a \"potential\" function?\\n 4. I cannot make any sense of eq 2. Is w \\in V the same as w_t? why can't you simply remove the indicator function and write it as \\sum_w \\in C ? why is the indicator function in the denominator as well? is the intent to have a logit distribution that only puts mass on the tokens in C?\\n 5. are the rollouts done on the backbone model or the expert model? have we considered /measured the inference time cost? this is an important consideration in a paper about MCTS-type methods.\\n\\n6. Does q_\\phi simply reward completions of the output that have tokens in C?\\n\\n7. intro: \\\"in contrast, an expert model.....\\\" : this is an interesting claim (does seem plausible). is there a citation for evidence?\\n\\n8. line 140: tend to generate similar token....\\\": what does this mean?\\n\\n9. i am not up to date on the faithfulness literature, but the kind of interventions that the paper describes as standard ways of evaluation i.e. word inclusion and perturbation just seem to be likely to be noise-prone, leading to unreliable evals?\\n\\n10. GenExpert =? lookahead?\\n\\n11. comment: the discussion between 329-342 helped understanding a bit and should be earlier in the paper.\\n\\n12. sec 5.2.2: the NLI example is a bad one i think. Submergible only means it is something that can be submerged, which doesn't automatically mean it is submerged.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer TvLG: Follow-up Questions\", \"comment\": [\"Many thanks for recognising the improvements in our updated version and for your valuable feedback! 
We also appreciate your constructive suggestion to further clarify the distinction between our method and ILQL.\", \"Firstly, at a high level, both methods share the similarity of leveraging logit perturbation to enhance the output of the backbone model. For clarity, we denote the backbone model as $M_b$ and the expert model as $M_p$.\", \"Then, we provide a detailed comparison in the following aspects:\", \"**Model Types**\", \"ILQL: Both the backbone model (referred to as the standard language model) and the perturbed model are GPT-2 small.\", \"Ours: Our backbone models are LLaMA3-8B and Mistral-7B, while perturbed models include LLaMA3-8B, LLaMA2-7B, and Mixtral-7B. Importantly, our method supports backbones and perturbed models of differing sizes and tokenisations.\", \"**Does it require extra training before deployment?**\", \"ILQL: Yes, there are three types of $M_{p}$: policy generation $\\\\pi_B$, value function model for $Q$ and $V$ generation, and a target value network. They are trained via implicit Q-learning objectives on exactly the same dataset as the inference tasks.\", \"Ours: No, we can use any $M_{p}$ pretrained on in-task datasets, or even out-of-task datasets (see response above `Fair Comparison with Vanilla Fine-Tuning of Backbone Model`.)\", \"**Perturbation strategy**:\", \"ILQL: calibrate the original policy using trained $Q$ and $V$, i.e., $\\\\pi(a \\\\mid h) \\\\propto \\\\pi_\\\\beta(a \\\\mid h) e^{\\\\beta(Q(h, a) - V(h))} = \\\\exp(\\\\log(\\\\pi_\\\\beta(a \\\\mid h)) + \\\\beta(Q(h, a) - V(h))).$\", \"Ours: the step-wise reward is given by $M_p$, and we then select the sequence with the highest sentence-level reward from the $K$ generated sequences (see Algorithm 1). This reward strategy is inspired by our preliminary study in Figure 1, which shows that $M_p$ prefers in-domain text and could contribute to faithfulness.\", \"**Overall analysis**\", \"Our method fits settings where $M_b$ and $M_p$ have different, and larger, backbones. 
In contrast, ILQL's reliance on training three models for $M_p$ can limit its scalability to larger backbone models, such as fine-tuning a LLaMA3-8B. Our approach, however, can leverage any expert model, even if pretrained on out-of-task datasets (see the response above, `Fair Comparison with Vanilla Fine-Tuning of Backbone Model`).\", \"Despite the simplicity of our method, our global reward inherently captures lookahead characteristics, which are crucial in offline RL (including MCTS) for avoiding local optima. This aligns with the main contribution of ILQL, which highlights that \\u201cour offline RL method can lead to significant improvements in final performance as compared to such \\u2018single step\\u2019 approaches, particularly when the training data is highly suboptimal for the desired task.\\u201d Additionally, we provide a comparison with the local-optimal (single-step) method, i.e., logitfusion [1], in Table 7.\"], \"reference\": \"[1] Tuning language models by proxy\\n\\n\\nWe hope this pointwise comparison with ILQL addresses your concerns. \\n\\nPlease let us know if you would like a more detailed comparison on other aspects.\"}", "{\"comment\": \"> Overall Results: Although our method fails to exhibit better faithfulness compared with the fine-tuned model expert model, it instead strikes a challenging trade-off between accuracy (CLS) and faithfulness (expert model), which has been discussed in [1]. This is also one of the core motivations: combining the strengths of classification-specialised and rationale-specialised models.\\n\\nThanks for adding the expert model results. In this case, I don't see a clear benefit of the proposed method over the expert models. As for the argument that the proposed method achieves a trade-off between accuracy and faithfulness, one can easily combine CLS and Expert by first predicting label words using CLS then explaining the answer using the global expert model. 
I feel this should be a strong baseline and would at least perform as well as the proposed method in achieving the balance between accuracy and faithfulness.\"}", "{\"title\": \"Response to reviewer TvLG (1)\", \"comment\": [\"Thank you for your valuable time and your thoughtful feedback! We address each point as follows.\", \"**Writing about method**\", \"Thanks for your thoughtful questions regarding the method details. We have updated the method section (Section 3.3 and Section 3.4) with a newly updated **Algorithm 1** that elaborates on the pipeline, addressing your suggestion that *it would be better to describe the method in more detail*.\", \"The **notation $t \\\\wedge T$** represents the minimum of the two values $t$ and $T$. This mathematical convention is common in contexts like stochastic processes and optimisation. Here, it indicates that the product runs from $i = 1$ up to the lesser of $t$ and $T$. This effectively caps the sequence or product based on the minimum value.\", \"**$\\\\mathbb{P}_t(s_t)$** represents the probability of reaching $s_t$ under the distribution $\\\\mathbb{P}_t$. Specifically, $[S_t=s_t]$ is an indicator function that is equal to 1 if the state at $t$ is $s_t$, and 0 otherwise. The numerator inside the expectation represents the product of rewards and the probability of reaching state $s_t$, ensuring that paths leading to high rewards over time are given more weight.\", \"**Rollout in MCTS** is used to estimate the value of future actions, helping navigate and expand the search tree effectively. Eq.(1) represents a probabilistic reward distribution of state $s_t$. $G(s_t,s_{t+1})$ is analogous to the reward function in MCTS, with the advantage of estimating the reward via lookahead. 
In our framework, the global expert model $U^g$ performs reward estimation for the policy generated by the backbone model (see `GlobalReward` function).\", \"**Inference efficiency**: we totally agree that inference-time cost is an important consideration in MCTS-like methods. Unlike existing *explicit* MCTS requiring expensive rollouts or simulations to evaluate potential actions, we compute expected rewards in a more integrated way, as shown in the numerator of Eq.(1), streamlining the decoding process. Our cost is similar to beam search, while detailed comparison results are available in Table 14 (Table 13 in the original submission).\", \"**Equation about reward calculation**: to ease understanding, we removed Eq.2 and Eq.3; instead, we updated the functions *LocalMask* and *GlobalReward* in **Algorithm 1** in Section 3.3.\", \"Local Reward: We introduce a set of classification label words and we remove the label words which are not included in the expert model\\u2019s prediction. We then renormalise the output probability based on the new vocabulary (See `LocalMask` function in Algorithm 1).\", \"Global Reward: The expert model generates a lookahead reward by evaluating the plausibility of the tuple $(x_t, x_{t+1})$ generated from the backbone model (See `GlobalReward` function in Algorithm 1).\"]}", "{\"comment\": \"Thanks for your response. I read it while I'd like to maintain my score.\\n\\nThanks\"}", "{\"comment\": \"Thank you once again for your detailed feedback and thoughtful comments!\\n\\nWe would like to emphasise that our expert model can select a different $x_{t-1}$; based on the selected $x_{t-1}$, the backbone model, which is responsible for trajectory generation, will generate/sample differently, as the generation probability is $p(\\\\cdot|x_{t-1})$. Details are in the response to `why search algorithm` above. \\n\\nWe greatly appreciate your valuable feedback, which has been instrumental in refining our work. 
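As a concrete illustration of the `LocalMask` step discussed in this thread (masking out label words the expert classifier did not predict, then renormalising over the restricted vocabulary), here is a minimal sketch. The label words and logit values are hypothetical; this is not the paper's actual implementation:

```python
import math

def local_mask(token_logits, label_words, expert_prediction):
    """Sketch of LocalMask: remove label words that the expert classifier
    did not predict, then renormalise the remaining probability mass."""
    dropped = label_words - expert_prediction
    kept = {w: logit for w, logit in token_logits.items() if w not in dropped}
    z = sum(math.exp(logit) for logit in kept.values())  # softmax normaliser
    return {w: math.exp(logit) / z for w, logit in kept.items()}

# Hypothetical NLI setup: the expert classifier predicted "contradiction",
# so the other label words are masked out before renormalisation.
probs = local_mask(
    token_logits={"entailment": 2.0, "neutral": 1.0, "contradiction": 1.5, "the": 0.5},
    label_words={"entailment", "neutral", "contradiction"},
    expert_prediction={"contradiction"},
)
```

After masking, the surviving probabilities sum to one, and only the expert-approved label word retains any label-word mass.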
We will continue to improve our paper!\\n\\nThank you!\"}", "{\"comment\": \"The discussion that we can participate in will end soon. Could you kindly confirm whether our responses and the revision in the newly uploaded pdf have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. Please let us know if you have further comments.\\n\\nThank you for your valuable time and effort in reviewing our work.\"}", "{\"title\": \"Response to Reviewer TvLG (3)\", \"comment\": \"**Fair Comparison with Vanilla Fine-Tuning of Backbone Model**\\n\\nThank you for pointing out this critical point. The comparison results are updated in Table 5 and Table 6, where **CLS** is the expert model used to predict the answer and the **expert model** refers to the global expert model used to generate the rationale. As CLS is a classifier that can't generate rationale, we only apply it in the accuracy evaluation, while the expert model is applied in both metric calculations. Below is our analysis:\\n\\n**(a) comparing with fine-tuned backbone model**: \\n We have updated the ablation results in Table 5 and Table 6. For the ASAP dataset, the expert model uses the same backbone as the primary model (i.e., Llama3-8B) and has been fine-tuned on the ASAP training set. This aligns well with your intended comparison with *Vanilla Fine-Tuning of Backbone Model*.\\n - **Accuracy Results**: In Table 5, by comparing the expert model with our full framework, we observe overwhelming advantages of our full model across **all four subsets**, especially on Q2, with results showing 68% vs. 48% (ours vs. expert). The average results are 74\\\\% and 69\\\\% for ours (full) and the expert model, respectively. \\n - **Faithfulness Results**: The expert-only method achieves better faithfulness on three subsets (except for Q4). 
Interestingly, our full framework (including CLS & expert) behaves better than the our+expert model, showing the synergised effect of CLS and expert. \\n- **Overall Results**: Although our method fails to exhibit better faithfulness compared with the fine-tuned expert model, it instead strikes a challenging trade-off between accuracy (CLS) and faithfulness (expert model), which has been discussed in [2]. This is also one of the core motivations: combining the strengths of classification-specialised and rationale-specialised models.\\n\\n**(b) compare with smaller fine-tuned models**: \\nFor other datasets where the expert models are Llama2-7B (smaller than our backbone): (i) The comparison between our full (i.e., *our+cls+expert*) and *our+expert* also shows the synergistic effect of combining the two experts. (ii) The faithfulness of ours (full) is better than the expert model on SNLI and MNLI, with 15\\\\% vs 13\\\\% and 19\\\\% vs 9\\\\%, respectively. This suggests that our approach not only achieves faithfulness improvements over smaller expert models but also has the potential to leverage weak supervision to unlock the capabilities of larger backbone models.\\n\\n**(c) Generalised to Out-of-Task Expert Model**: \\nNote that it is not strictly necessary for the expert model to be trained on the exact task dataset. For instance, we experimented with Expert Model 2 (/Weyaxi/Einstein-v2-7B in huggingface), which is trained on general science question-answering instead of ASAP-specific data. Results in Table 12 show that incorporating this out-of-task expert model leads to improved faithfulness on 11 out of 12 metrics. 
These results validate the generalisability of our method.\\n\\nOverall, our framework demonstrates: (i) superior performance in balancing accuracy and faithfulness when the expert model is of the same size, (ii) improved faithfulness compared to smaller, in-task trained expert models, and (iii) robust generalisability, effectively adapting to scenarios where the expert model has not been specifically trained for the inference task.\\n\\n**Case study**\\n\\nWe appreciate your observation that \\u201csubmergible\\u201d does not necessarily mean the individual is submerged. Our key point here is to emphasise that our method can faithfully respond to meaningful changes in the input (even if the response is imperfect), whereas the backbone model completely ignores the perturbation.\\n\\nFor better clarity, we replaced the example with the one shown below (also updated in the revised pdf):\\n```\\nPerturbed Premise: [frugally] Requires free registration.\\nHypothesis: Does not require free registration.\\nBackbone: entailment; Requires free registration is a necessary condition for only if Requires free registration.\\nOurs: Contradiction; The premise states that the website [frugally] requires free registration, which implies that a user must provide some information or sign.\\n```\\n\\n\\n**References**:\\n\\n[1] The unlocking spell on base LLMs: Rethinking alignment via in-context learning. ICLR2024\\n\\n[2] Question Decomposition Improves the Faithfulness of Model-Generated Reasoning. ICML23\\n\\n[3] On Measuring Faithfulness or Self-consistency of Natural Language Explanations. ACL24\\n\\n[4] Fine-tuning Large Language Models for Domain-specific Machine Translation\\n\\n[5] SciBERT: A Pretrained Language Model for Scientific Text. ACL2019\\n\\n[6] BioBERT: A Pretrained Biomedical Language Representation Model for Biomedical Text Mining. Bioinformatics 2020.\\n\\n[7] Don\\u2019t Stop Pretraining: Adapt Language Models to Domains and Tasks. 
ACL 2020\\n\\n\\nPlease let us know if you have further concrete questions or concerns that we can address. Thank you for your engagement with our work.\"}", "{\"title\": \"Response to Reviewer TvLG (2)\", \"comment\": \"**Supported literature**\\n\\n**Q1**: Literature about \\u201cexpert model can generate domain-specific words\\u201d\\n\\n**R1**: We totally agree that evidence from existing literature about this claim will make our argument more convincing. In general, Domain-Specific Language Models are fine-tuned or trained from scratch on domain-specific data, enabling them to comprehend and generate language that reflects the unique terminology, jargon, and linguistic patterns prevalent in that domain. These capabilities lead to better performance in domain-specific tasks, such as machine translation [4], scientific text understanding [5], and question answering in biology [6]. More direct evidence in [7] shows that after training on the domain-specific corpus the masked language model loss decreases on 50K randomly sampled held-out documents from each domain, implying a better fit to the domain-specific text.\\n\\nMoreover, we calculated the percentage of generated domain-specific words ourselves on the ASAP dataset, i.e., student essay assessment. Specifically, we use TF-IDF to select the top 200 words from the prompt, including the question, key elements, and rubric, as the domain-specific words. Then, we calculate the percentage of these domain-specific words in the responses from the backbone model, the expert model, and our model (all of them Llama3-8B). 
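The TF-IDF selection and counting procedure just described can be sketched as follows. The toy prompt, background corpus, whitespace tokenisation, and smoothed IDF here are illustrative assumptions, not the exact setup behind the reported percentages:

```python
import math
from collections import Counter

def top_tfidf_words(prompt, corpus, k=200):
    """Rank the prompt's words by TF-IDF against a background corpus
    and keep the top k as 'domain-specific' words."""
    docs = [doc.lower().split() for doc in corpus]
    tokens = prompt.lower().split()
    tf = Counter(tokens)
    n_docs = len(docs)

    def idf(word):  # smoothed inverse document frequency
        df = sum(1 for doc in docs if word in doc)
        return math.log((1 + n_docs) / (1 + df)) + 1.0

    scores = {w: (count / len(tokens)) * idf(w) for w, count in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def domain_word_ratio(response, domain_words):
    """Fraction of response tokens that are domain-specific words."""
    tokens = response.lower().split()
    return sum(t in set(domain_words) for t in tokens) / max(len(tokens), 1)

# Toy illustration: "energy" recurs in the prompt and is rare in the
# background corpus, so it ranks highly.
words = top_tfidf_words(
    "photosynthesis converts light energy into chemical energy",
    ["the cat sat", "light travels fast", "water is wet"],
    k=3,
)
ratio = domain_word_ratio("photosynthesis stores energy", words)
```

`domain_word_ratio` then mirrors the percentage computation: the fraction of a model's response tokens that fall in the selected domain-word list.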
\\nThe results for the science (Q1, Q2) and biology (Q3, Q4) subjects shown below clearly verify that the expert model responds to the question with more domain-specific words.\\n\\n||Q1|Q2|Q3|Q4|\\n|---|---|---|---|---|\\n|Backbone|14.36%|6.85%|0.07%|1.04%|\\n|Expert|16.88%|19.26%|0.26%|1.12%|\\n\\n**Q2**: line140, the instruction-tuned model generates a similar token distribution in [1]\\n\\n**R2**: The original sentence in [1] is \\u201cSurprisingly, we find that base and aligned LLMs (e.g., Llama-2 and Llama-2-chat) typically perform almost identically in most positions in terms of ranking tokens during decoding\\u201d. That is, given the same input context, the token distributions produced by these models are similar at any position in the generated sequence. In their evaluation across 1,000 examples, 92.2% of the tokens overlapped between the base and aligned LLMs. We reference this observation to emphasise that even advanced (instruction-tuned) models face limitations in generating knowledge-intensive words.\\n\\n**Q3**: Faithfulness evaluation based on perturbation\\n\\n**R3**: Yes, the state-of-the-art faithfulness evaluation methods are perturbation-based. For example, [2] introduces mistakes or biases into the context, while [3] evaluates faithfulness by removing subsets of input tokens. \\n\\n**Novelty**\\n\\nThanks for highlighting the two related works. Both primarily focus on training strategies, whereas our method is implemented during inference. Your insights into the two possibilities are absolutely correct. Our constrained generation framework is specially designed for faithful rationale generation, employing a simplified yet more efficient version of MCTS. Despite its simplicity, our contributions are not trivial:\\n- As demonstrated in Table 1, we first identified the importance of domain-specific words in enhancing context adherence. 
This insight motivated us to increase the generation probability of domain-specific words, leading to the use of a global reward (from the expert model) to improve faithfulness. To the best of our knowledge, this is the first study to improve faithfulness by explicitly encouraging the generation of domain-specific tokens. \\n- Since there is a trade-off between faithfulness and accuracy [2], we incorporate a local reward derived from an expert classification model to enhance accuracy before generating rationales.\\n- From a technical perspective, most MCTS-based methods typically contribute by designing task-specific rewards and efficiently incorporating them into the simulation process. Our method is different from ILQL in at least two key aspects: (i) we proposed two novel reward mechanisms tailored to the faithfulness problem; (ii) our simulation process integrates both the expert model and the backbone model. Our implicit MCTS framework avoids the explicit rollout process while preserving the core principles of MCTS: (a) estimating node rewards based on the lookahead $G_t$; (b) facilitating a more principled framework to balance exploration and exploitation through the probabilistic expectation, rather than relying on a balance coefficient. \\n\\nAbove all, our contribution extends far beyond simply adapting a weaker version of an existing method to a particular task. Our framework can be applied to other constrained generation tasks, such as personalised generation and diversity-oriented generation, by defining different rewards, while keeping computation costs low.\\n\\nWe have updated the **Contribution in Section 1** and the **Comparison with existing MCTS-like decoding methods** in Section 3.3, line 167-172, to highlight the novelty of our framework.\"}", "{\"comment\": \"Thank you for providing additional results and discussions. Still\\n1. 
I am not sure if **averaging** the normalized faithfulness is the correct way to compare the proposed method with the baselines. In fact, the expert model is more faithful in 4 out of 7 datasets, and the most contribution to the advantage of the proposed method seems to come from MNLI.\\n2. As for the accuracy, I am not sure why we are comparing the expert model rather than the CLS baseline.\\n3. I still feel CLS + expert model would be a very strong baseline in the setting of this paper.\\n\\nOverall I feel the experiments need to be conducted and analyzed more carefully, and the proposed algorithm needs to be better motivated, scoped, and described in detail. As a result, I think the current paper is not yet ready for publishing, and I would maintain my current evaluation score.\"}", "{\"summary\": \"The work aims to improve the faithfulness of the LLM-generated rationales for reasoning tasks. They propose an inference-based method where an LLM is guided to generate more faithful rationales by both local and global rewards. Both rewards are provided by additional expert models which are trained on the downstream tasks. Experiments demonstrate the effectiveness of the method in achieving higher accuracy and faithfulness.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Faithful rationales are important for explainability and model control, which makes this work well-motivated.\\n2. The proposed method is training-free (although with reliance on trained expert models), making their method portable.\\n3. A comprehensive set of experiments is conducted to showcase the effectiveness of their proposed method.\", \"weaknesses\": \"1. The method requires the model to generate the answer prior to the rationale, which provides no guarantee that the decision is made based on the rationale. The model could still suffer from inherent biases.\\n2. 
The method is limited to reasoning tasks with constrained answer space, limiting its generalization to more open-ended tasks.\\n3. The method is poorly introduced. It would be very helpful if the authors could explain what exactly Eq.1-3 are doing in plain words.\", \"questions\": \"Could this method generalize to the setting where the rationale is generated before the answer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer msmv (2)\", \"comment\": \"**Q3: What Eq.1-3 are doing in plain words**.\\n\\n**R3**: The equations referenced are primarily derived from the probabilistic framework of the Feynman-Kac model. We have significantly **revised Sections 3.3 and 3.4 in the manuscript**, along with an updated **Algorithm 1** that introduces our method in a step-by-step manner. Please refer to the revised PDF for detailed explanations. \\n\\nAdditionally, we provided a general response to all reviewers, detailing how the two rewards\\u2014local and global\\u2014are calculated (Eq. 2 and Eq. 3). Below is a pointwise response to your proposed questions:\\n- Eq1: In the context of generating tokens $s_t$ using model $f_\\\\theta$, the potential function $G_t$ maps $(s_t, s_{t+1})$ to a non-negative score, analogous to the reward function. The adjusted probability that $f_\\\\theta$ generates $s_t$ is calculated by Eq.1. $[S_t=s_t]$ is an indicator function that is equal to 1 if the state at $t$ is $s_t$, and 0 otherwise. 
The numerator inside the expectation represents the product of rewards and the probability of reaching state $s_t$, ensuring that paths leading to high rewards over time are given more weight. Generation continues until a terminal token or the maximum length of the sequence $T$, i.e., $t \\\\wedge T=\\\\text{min}(t,T)$.\\n- Local Reward (Eq2): We introduce a set of classification label words and we remove the label words which are not included in the expert model\\u2019s prediction. We then renormalise the output probability based on the new vocabulary.\\n- Global Reward (Eq3): The expert model generates a lookahead reward by evaluating the plausibility of the tuple $(x_t, x_{t+1})$ generated from the backbone model.\\n\\n\\n**References**\\n\\n[1] Faithfulness tests for natural language explanations. ACL2023\\n\\n[2] Faithful explanations of black-box nlp models using llm-generated counterfactuals. ICLR24\\n\\n[3] Can llms produce faithful explanations for fact-checking? towards faithful explainable fact-checking via multi-agent debate. 2024\\n\\n[4] Confidence-aware learning for deep neural networks. ICML2020\\n\\n\\nPlease let us know if you have further questions or any feedback. Thanks!\"}", "{\"summary\": \"This work proposes an inference-time method to improve the performance and faithfulness of general (instruction-tuned) large language models (LLMs). Specifically, the method uses expert models to provide fine-grained and lookahead rewards to search and reweight possible tokens or continuations proposed by the LLM. 
With the help of expert models trained on the target task or domain, the proposed method can improve both the accuracy and faithfulness of the zero-shot answers of two instruction-tuned models on three reasoning tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The direction this paper explored has been receiving increasing interest recently: improving the quality of LLM answers at inference time without modifying the model weights directly. The proposed method improves the zero-shot accuracy and faithfulness of two strong general instruction-tuned models (Llama-3-8B and Mistral-7b-Instruct-v0.3) on three reasoning tasks. The experiment showing the benefits of going beyond local/token-level rewards and taking into account the global/lookahead reward is interesting.\", \"weaknesses\": [\"There need to be more details explaining the proposed method, the motivation of each part, the equations and variables, the relation to related work, and the implementation details. Specifically:\", \"Section 3.3: how does the Feynman-Kac Formulae model inspire the faithfulness-seeking search framework? The connection is not straightforward. The notation of eq 1 is ambiguous. What does posterior P_t(st) mean exactly? How is it used in the proposed method? Also, the equation itself needs more explanations on what it is computing and why in this way.\", \"Section 3.4 (Local constraint): line 179 I find it hard to follow the motivation. How \\\"certain attributes can be implicitly conveyed over longer spans rather than the individual token\\\" is connected to \\\"Instead, domain-specific experts tend to demonstrate better accuracy in knowledge-rich tasks.\\\"? If the domain expert has better accuracy why not just use the expert to predict the scores? Why bother to use them to improve the backbone LLM? In lines 180-181, it says \\\"we introduce a set of classification label words C from these expert models ...\\\", how is C constructed? 
What is the motivation behind token masking?\", \"Section 3.4 (Lookahead Reweight): Equation 3 is hard to understand without proper explanations. $m$ and $x_i$ are not explained in the texts. $s_{t+l}=s_{t-1}||w_t$ is more confusing: $s_{t+l}$ has $t+l$ tokens while $s_{t-1}||w_t$ has $t$ tokens. What does equality mean here?\", \"Many experimental details are missing, and important experiments are missing.\", \"Missing baselines: the performance and faithfulness of the expert models alone. If the faithfulness or accuracy of the expert models are better than the backbone LLM, why do we even need to use the expert models to improve the backbone LLM?\", \"Evaluation details: how is the original model evaluated? If it is a zero-shot evaluation. What is the exact prompt and task format used? How to extract answers from the outputs to calculate the accuracy? The backbone LLMs are state-of-the-art instruction-tuned models. However, the task performance as well as the faithfulness are quite low, so the authors need to provide more details on the evaluation.\", \"What is the choice of hyperparameter n (number of rollouts) and how is it chosen?\", \"The writing of the paper could be improved for better readability. First, the paper is not properly scoped. For example, in lines 16-18, it says \\\"... to ensure that LLM-generated rationales are logically coherent and comprehensive.\\\" However, there is no result discussing the logical coherence or comprehensiveness of answers in the paper. Another example is line 108: it says \\\"We firstly introduce the faithfulness definition in our context,\\\", but there is no clear definition in section 3.2.\"], \"questions\": \"1. 
If the expert model is as big as the base model, how can the computational cost be similar to beam search?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer TvLG: Follow-up Questions\", \"comment\": \"As the discussion period is nearing its end, we kindly request your feedback on the comparison with ILQL provided below. We hope it addresses your concerns adequately.\\n\\n*We followed the thread of your feedback and noticed it was mistakenly placed in the \\\"Reviewer msmv\\\" block*. To ensure clarity, we have moved our response here. We apologise for any inconvenience this may have caused and sincerely appreciate your attention to this matter.\\n\\nFirstly, at a high level, both methods share the similarity of leveraging logit perturbation to enhance the output of the backbone model. For clarity, we denote the backbone model as $M_b$ and the expert model as $M_p$. \\n\\nThen, we provide a detailed comparison in the following aspects:\\n\\n**Model Types**\\n- ILQL: Both the backbone model (referred to as the standard language model) and the perturbed model are GPT-2 small.\\n- Ours: Our backbone models are LLaMA3-8B and Mistral-7B, while perturbed models include LLaMA3-8B, LLaMA2-7B, and Mixtral-7B. Importantly, our method supports backbones and perturbed models of differing sizes and tokenisations.\\n\\n**Does it require extra training before deployment?**\\n- ILQL: Yes, there are three types of $M_{p}$: policy generation $\\\\pi_B$, value function model for $Q$ and $V$ generation, and a target value network. 
They are trained via implicit Q-learning objectives on the exact dataset same as inference tasks.\\n- Ours: No, we can use any $M_{p}$ pretrained on in-task datasets, or even out-of-task datasets (see response above `Fair Comparison with Vanilla Fine-Tuning of Backbone Model`.)\\n\\n**Perturbation strategy**:\\n- ILQL: calibrate the original policy using trained $Q$ and $V$, i.e., $\\\\pi(a \\\\mid h) \\\\propto \\\\pi_\\\\beta(a \\\\mid h) e^{\\\\beta(Q(h, a) - V(h))} = \\\\exp(\\\\log(\\\\pi_\\\\beta(a \\\\mid h)) + \\\\beta(Q(h, a) - V(h))).$\\n- Ours: the step-wise reward is given by $M_p$, and then select the sequence with highest sentence-level reward from generated $K$ sequences (see in Algorithm 1). This reward strategy is inspired by our preliminary study in Figure 1 that $M_p$ prefers in-domain text and could contribute to faithfulness. \\n\\n**Overall analysis**\\n- Our method can fit the situation where $M_b$ and $M_p$ have different but larger backbones. In contrast, ILQL's reliance on training three models for $M_p$ can limit its scalability to larger backbone models, such as fine-tuning a LLaMA3-8B. Our approach, however, can leverage any expert model, even if pretrained on out-of-task datasets (see the response above, `Fair Comparison with Vanilla Fine-Tuning of Backbone Model`).\\n- Despite the simplicity of our method, our global reward inherently captures lookahead characteristics, which are crucial in offline RL (including MCTS) for avoiding local optima. 
This aligns with the main contribution of ILQL, which highlights that \\u201cour offline RL method can lead to significant improvements in final performance as compared to such \\u2018single step\\u2019 approaches, particularly when the training data is highly suboptimal for the desired task.\\u201d Additionally, we provide a comparison with the local-optimal (single-step) method, i.e., logitfusion [1], in Table 7.\", \"reference\": \"[1] Tuning language models by proxy\\n\\n\\nWe hope this pointwise comparison with ILQL addresses your concerns. \\n\\nPlease let us know if you would like a more detailed comparison on other aspects.\"}", "{\"comment\": \"> Letting the backbone model answer directly can also potentially hallucinate. I don't understand why localmasking will hallucinate less than directly use domain-expert to predict the label. As for irrelevant context generation one can simply do as in local masking to restrict the domain-expert's prediction only on the label words for extracting the answer.\\n\\n**R1**: As explained in the response above, our method is not equivalent to letting the backbone answer directly; the expert model does affect step-wise generation. \\n\\n**R2**: LocalMask is essentially a classifier (not a generative model, so it cannot generate rationales), and classifiers are more likely to produce highly accurate predictions than generative models, i.e., the expert model, as shown in Table 5. Therefore, we do not use the generative model for label prediction.\\n\\n> I don't understand your point. From Table 5 apparently CLS (Expert model) perform better than your method on all datasets? It is also mentioned in the paper \\\"Notably, when incorporating the CLS, our method does not necessarily perform as well as the classifier alone.\\\" So why not just use CLS to predict the label word? Anyway your method uses domain experts, so why not choose the most accurate model for generating the answer? 
The faithfulness of the rationale can be addressed separately.\\n\\nSorry for the confusion. \\n- First, we would like to clarify that our framework utilizes two types of expert models for local and global rewards, respectively. The local reward model, referred to as CLS in our experiments, is a classifier (not a generative model) and serves as the primary source of accuracy by scoring prediction consistency. The global reward model, referred to as the expert model, is a generative model and primarily contributes to enhancing faithfulness in the generated outputs.\\n- Second, if I understand correctly, the idea is to use two models\\u2014one aimed at accurate answer generation and the other at maintaining faithfulness. We acknowledge that improving both accuracy and faithfulness within a single model remains a key challenge, as emphasized in recent works (e.g., [1]). Moreover, even if we had access to accurate labels and directly fed them into the generative model, this approach would not necessarily guarantee improvements. On the contrary, incorporating predicted results directly as prompts can introduce issues such as ungrounded hallucinations, as highlighted in recent studies [2, 3]. Thus, effectively balancing accuracy and faithfulness continues to be a non-trivial and active area of research.\\n\\nPlease let us know if you have any concerns about the motivation behind our design or any other concrete questions. \\n\\nThanks again for your patience and valuable time in engaging with our work!\\n\\n**References**\\n\\n[1] Question Decomposition Improves the Faithfulness of Model-Generated Reasoning.\\n\\n[2] Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. 
\\n\\n[3] Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations\"}", "{\"comment\": \"We've taken your initial feedback about (i) \\\"the generalisability of method\\\" (ii) writing about method into careful consideration and incorporated them into the revised pdf. Could you kindly confirm whether our responses have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. Please let us know if you have further comments.\\n\\nThank you for your time and effort in reviewing our work.\"}", "{\"title\": \"Summary of revision in pdf\", \"comment\": [\"Thank you very much for all the reviewers' feedback. We have explicitly incorporated your initial feedback into our revised PDF (in blue):\", \"Summarised our three-fold **contributions in the Introduction.**\", \"Revision on **method introduction:**\", \"**Algorithm Description In Section 3.3**: we introduced a step-by-step description of our method in Algorithm 1.\", \"**Feynman-Kac Model Explanation in Section 3.3**: we reorganised the introduction of the model and added a more thorough explanation of Eq. 
(1).\", \"**Local and Global Reward Formulations**: we elaborate how the rewards are calculate by referring to Algorithm 1.\", \"Update the **ablation study results** We updated the performances of vanilla expert models in **Table 6**.\", \"Replace the **case for NLI in Section 5.3 Case Studies.**\", \"Add detailed **evaluation** and **hyper-parameter** in **Appendix A.**\", \"Add **faithfulness evaluation results** on three tasks based on the **Mistral-7B backbone in Appendix B.**\", \"In **Appendix C.1**, we elaborated on how our framework can be generalized to tasks where the **answer space is unconstrained.**\"]}", "{\"title\": \"Response to reviewer MSQh (2)\", \"comment\": \"**Missing comparison with expert model**\\n \\nThank you for pointing out this critical point. The comparison results are updated in Table 5 and Table 6, where **CLS** is the expert model used to predict the answer and the **expert model** refers to the global expert model used to generate rationale. As CLS is a classifier that can't generate rationale, we only apply it in acc evaluation, while the expert model is applied in both metrics calculation. Below is our analysis:\\n \\n**(a) comparing with fine-tuned backbone model**: \\n We have updated the ablation results in Table 5 and Table 6. For the ASAP dataset, the expert model uses the same backbone as the primary model (i.e., Llama3-8B) and has been fine-tuned on the ASAP training set. This aligns well with your intended comparison with *Vanilla Fine-Tuning of Backbone Model*\\n - **Accuracy Results**: In Table 5, by comparing the expert model with our full framework, we observe overwhelming advantages of our full model across **all four subsets**, especially on Q2, with results showing 68% vs. 48% (ours vs. expert). And the average results are 74\\\\% and 69\\\\% for our (full) and expert model. \\n - **Faithfulness Results**: The expert-only method achieves better faithfulness on three subsets (except for Q4). 
Interestingly, our full framework (including CLS & expert) behaves better than our+expert model, showing the synergised effects of CLS and expert. \\n- **Overall Results**: Although our method fails to exhibit better faithfulness compared with the fine-tuned expert model, it instead strikes a challenging trade-off between accuracy (CLS) and faithfulness (expert model), which has been discussed in [1]. This is also one of the core motivations: combining the strengths of classification-specialised and rationale-specialised models.\\n \\n**(b) compare with smaller fine-tuned models**: \\nFor other datasets where the expert models are Llama2-7B (smaller than our backbone): (i) The comparison between our full (i.e., *our+cls+expert*) and *our+expert* also shows the synergistic effects of combining the two experts. (ii) The faithfulness for our (full) is better than the expert model on SNLI and MNLI, with 15\\\\% vs 13\\\\% and 19\\\\% vs 9\\\\%, respectively. This suggests that our approach not only achieves faithfulness improvements over smaller expert models but also has the potential to leverage weak supervision to unlock the capabilities of larger backbone models.\\n \\n**(c) Generalised to Out-of-Task Expert Model**: \\nNote that it is not strictly necessary for the expert model to be trained on the exact task dataset. For instance, we experimented with Expert Model 2 (/Weyaxi/Einstein-v2-7B in huggingface), which is trained on general science question-answering instead of ASAP-specific data. Results in Table 12 show that incorporating this out-of-task expert model leads to improved faithfulness on 11 out of 12 metrics. 
These results validate the generalisability of our method.\\n \\nOverall, our framework demonstrates (i) superior performance in balancing accuracy and faithfulness when the expert model is of the same size, (ii) improved faithfulness compared to smaller, in-task trained expert models, and (iii) robust generalizability, effectively adapting to scenarios where the expert model has not been specifically trained for the inference task.\\n \\n**Evaluation details**:\\nThanks for asking the question about detailed evaluation, which helps clarify our method. We have updated the experiment setup section in Appendix A, with the prompts used in different tasks and the hyperparameter configuration.\\n \\n**Q1: The exact prompt and task format used** \\nYes, we use zero-shot evaluation. Please refer to the prompt template in Appendix A.\\n \\n**Q2: How to extract answers from the outputs to calculate the accuracy?** \\nThe answer/predicted labels are extracted using specifically designed regular expressions corresponding to our format requirements within our prompt instruction.\\n \\n**Q3: Explanation of the undesirable performance of backbone models** \\nThe backbone model performs worse since those models are not directly trained on our selected datasets. This is also evident in the LLaMA paper [2], which shows that zero-shot inference on TruthfulQA with the LLaMA 7B model has an accuracy of 33\\\\%.\\n \\n**Q4: number of rollouts** \\nWe use a beam of 3 and 10 particles for decoding. Therefore, 30 different rollouts are calculated during decoding.\\n\\n**References**: \\n[1] Question Decomposition Improves the Faithfulness of Model-Generated Reasoning. ICML23 \\n[2] LLaMA: Open and Efficient Foundation Language Models. 2023\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"I have bumped up the score since many of my questions were addressed satisfactorily. I am still not very convinced by your discussion on the novelty aspect. 
I'm not sure, for example, if the highlighted difference with ILQL is very significant.\"}", "{\"title\": \"Response to Reviewer msmv (1)\", \"comment\": \"Thanks for reviewing our work; we address your concern about the generalisability of our framework.\\n\\n**Q1: Generalise to situations where the rationale is generated before the answer.**\\n\\n*R1:* It is certain that LLMs can generate answers either before or after the rationale. \\n- (i) To prevent scenarios where an overly long rationale causes the answer to exceed the output length limit, we prioritize generating the answer first. This is achieved by providing explicit instructions and demonstrations where the answer precedes the rationale, ensuring the backbone LLM generates the answer at the beginning. \\n- (ii) In our current setting, this approach is motivated by the observation that specialised smaller models often perform better at classification tasks. By leveraging these models for accurate rating, we establish a prior of a likely correct rating, which in turn helps ensure that the rationale generated afterward is more faithful to the truth. \\n- (iii) Through **Algorithm 1** updated in Section 3.3, our incorporation of the function `localmask` for answer prediction is implemented at the first timestep. Meanwhile, the rationale reward is applied throughout the sequence. This design allows the `localmask` to also be applied at the final timestep, guided either by a length limit or a terminal token indicator. Additionally, we need to evaluate whether feeding the generated rationale back into the classifier via the `localmask` function would degrade or enhance classification accuracy. \\n\\n**Q2: Limited tasks to reasoning.**\\n\\n**R2**: Thank you for raising this critical point. \\n\\nFirstly, we would like to clarify that existing faithfulness evaluations are traditionally based on the assumption that answers can be straightforwardly evaluated as either identical or not. 
Faithfulness evaluation primarily focuses on determining whether changes in the input lead to corresponding changes in the answer. Evaluating open-ended questions introduces the additional challenge of assessing semantic equivalence, which is not the primary focus of most existing studies on faithful rationales. For example, research in this area often evaluates tasks with clear answer spaces, such as Natural Language Inference (NLI) and multiple-choice QA [1] (both of which are included in our evaluation), as well as sentiment classification [2] and fact-checking tasks framed as binary classification [3].\\n\\nSecondly, our method is extendable to scenarios with an infinite label space $( |\\\\mathcal{C}| = \\\\infty )$, even though the current evaluations are conducted on tasks with a constrained label space $( |\\\\mathcal{C}| = N \\\\in \\\\mathbb{N} )$. For instance, in mathematical problem-solving tasks, the answer could be any arbitrary number. In such cases, the expert model provides a prediction $M$, with its confidence expressed as the probabilities $ w_1$ for $M $ and $w_2 $ (for the second most probable prediction). The ratio $\\\\frac{w_1}{w_2} $ serves as an indicator of the expert's confidence in $M$ [4]. This confidence is then used as a multiplier to enhance the backbone model's prediction for $M$. Finally, the backbone model's transition distribution is renormalised to ensure a valid probability distribution.\"}", "{\"comment\": \"We've taken your initial feedback into careful consideration and incorporated them into our revised Pdf as indicated in the **Summary of Revision**. Could you kindly confirm whether our responses have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. 
Please let us know if you have further comments.\\n\\nThank you for your time and effort in reviewing our work.\"}", "{\"comment\": \"Thank you for the explanations. I am still not quite convinced that the current section 3 and algorithm 1 describe the proposed method clearly and in detail enough:\\n\\n> In contrast, our global reward mechanism reweights the output probabilities using an expert model, allowing tokens with relatively low probabilities to be sampled while eliminating high-probability tokens. In that way, we could change the selected topk tokens according to the score from the expert model. Based on the changes in the output sequence at step t, the backbone model will generate differently at timestep t+1, as the conditional generation depends on the updated conditioned input.\\n\\nI am still not quite sure how this is reflected in Algorithm 1. \\n1. The global reward $\\\\alpha_t^k$ is only used for selecting the output sequence after all the sampling is done, and the global reward does not affect the sampling stage at all. To me this is just using a reward model to weight/score K individually sampled sequences.\\n2. \\\"Based on the changes in the output sequence at step t, the backbone model will generate differently at timestep t+1, as the conditional generation depends on the updated conditioned input.\\\" According to Algorithm 1: $x_{t+1}^k\\\\sim \\\\pi_t(x_{t+1}^k|x_{1:t}^k, f_\\\\theta)$ for $k=1,\\\\dots,K$, the K sequences are sampled individually, so this is just parallel sampling of K length-T sequences. I don't know how the adjustment-of-probability aspect of Feynman-Kac is reflected here. Also, I don't think it is necessary to motivate using an expert model to score parallelly sampled outputs with something as complicated as Feynman-Kac. 
Among the $K\\times N$ generated trajectories, only the trajectory with the highest reward is selected. While our search process does not exactly replicate the step-by-step node selection and expansion in MCTS, it generates multiple trajectories in parallel to reduce computational cost and it should still be categorised as an MCTS algorithm, as in the papers [1][2].\\n1. \\\"at each timestep, the backbone model generates K tokens (where K is the beam search size)\\\": each of the $K$ sequences is generated independently, so I am not sure why K is called the beam search size.\\n2. \\\"Among the $K\\\\times N$ generated trajectories\\\", if I understand correctly, there are only $K$ length-$T$ trajectories. \\n3. I am not convinced that it is proper to call a method that just scores parallelly sampled outputs a search algorithm if the reward model does not affect how the trajectories are sampled.\\n\\nNevertheless, I appreciate your effort in addressing my concern and improving the paper.\"}", "{\"comment\": \"> Thanks for adding the expert model results. In this case, I don't see a clear benefit of the proposed method over the expert models. As for the argument that the proposed method achieves a trade-off between accuracy and faithfulness, one can easily combine CLS and Expert by first predicting label words using CLS and then explaining the answer using the global expert model. I feel this should be a strong baseline and would at least perform as well as the proposed method in achieving the balance between accuracy and faithfulness.\\n\\nThanks for raising this critical point! Your question about the performance of the backbone model inspired us to rethink the evaluation results. As the models are differently sensitive to the text, the faithfulness metrics vary a lot across different datasets, for example, from 0.01 to 0.05 on ASAP, and around 0.1 for the NLI task. Consequently, we have provided a normalised faithfulness evaluation table below. 
The faithfulness score is scaled by the maximum and minimum scores in each task. \\n\\n**Table1: Normalised faithfulness evaluation**\\n| | Backbone | Expert | Our(Full) |\\n|:----------:|:--------:|:------:|:---------:|\\n| ASAP-1 | 0.2920 | 0.8230 | 0.4513 |\\n| ASAP-2 | 0.4425 | 1 | 0.4336 |\\n| ASAP-3 | 0.3628 | 0.5310 | 0.5044 |\\n| ASAP-4 | 0 | 0.4602 | 0.8938 |\\n| SNLI | 0.2000 | 0.4000 | 0.6000 |\\n| MNLI | 0 | 0 | 1 |\\n| TruthfulQA | 0 | 1 | 0.7273 |\\n| Average | 0.1853 | 0.6020 | 0.6586 |\\n\\nOur full method achieves the highest faithfulness score compared to using the backbone or the expert model alone.\\n \\nThen, we provide an average faithfulness-accuracy comparison in Table2.\\n\\n**Table2: Averaged normalised faithfulness-accuracy**\\n\\n| | Normalised Faithfulness | Accuracy | Sum |\\n|:---------:|:-----------------------:|:--------:|:------:|\\n| Backbone | 0.1853 | 0.4171 | 0.6025 |\\n| Expert | 0.6020 | 0.6914 | 1.2935 |\\n| Our(full) | 0.6586 | 0.7400 | 1.3986 |\\n\\nOur(full) achieved the highest combined accuracy and faithfulness score, with clear improvements. \\n\\nFurthermore, we are in the process of implementing the baseline you mentioned\\u2014using the label predicted by CLS as a prompt for the rationale generation model. We will update our results accordingly once this implementation is completed.\\n\\nThank you again for your valuable suggestions. We are truly grateful for your feedback, which has significantly improved the clarity of our paper. We sincerely hope that the adjustments and clarifications we have provided address at least part of your concerns. If so, we kindly request that you consider adjusting your scores to reflect our efforts in responding to your feedback as a positive signal. Also, please let us know if you have further suggestions!\"}", "{\"comment\": \"I appreciate the authors' effort in addressing my concerns and revising the paper accordingly. 
Still, most of my concerns remain.\", \"algorithm_1\": \"1. inconsistent notation: what are $f$ and $f_\\theta$? Why in local mask is it sometimes $\\pi$ and sometimes $\\pi_t$?\\n\\n> We refer to the Feynman-Kac framework, which is a general framework of incorporating a potential function to adjust the original conditional probability. For faithfulness, we observed that the expert model tends to generate more domain-specific tokens, which contribute to context-coherent and faithful rationales. This insight motivated us to employ the expert model\\u2019s conditional generation probability as the potential function to adjust the backbone model\\u2019s conditional generation probability accordingly.\\n\\nI don't see how this is reflected in Algorithm 1. In the while loop all tokens are directly sampled from $\\pi_t$ without any adjustment, and it looks like the algorithm just samples K examples and chooses the sample with the highest probability according to the expert model's distribution. I don't see how Algorithm 1 is connected to the Feynman-Kac model and why it is a search algorithm.\\n\\n> *Firstly*, we would like to emphasise that attributes in language, such as toxicity and faithfulness, can manifest across longer spans of text rather than being confined to explicit attribute-bearing words. This is why we opt not to utilize token-level constraints like logit fusion [1] for addressing the faithful rationale generation problem. Such methods may fall short in capturing the nuanced and distributed nature of these attributes over extended contexts. *Instead*, we align with existing literature highlighting the effectiveness of domain-specific experts in achieving higher accuracy on tasks rich in domain knowledge. For example, task-specific classifiers are known to excel in tasks requiring specialised expertise. 
This observation inspires our approach of incorporating domain experts for label prediction to enhance both accuracy and faithfulness in rationale generation.\\n\\nI still don't understand why Localmasking would benefit faithfulness. To me it looks like a way to improve the accuracy. **So it seems like the whole algorithm is to use a domain expert to predict the correct label, then use the large backbone model to generate some candidate explanations, and use another faithful expert to score them.** Also my concern about how the class label words $C$ are chosen still remains. \\n\\n> The primary goal of our framework is to maintain a trade-off between task performance (accuracy) and rationale faithfulness. Incorporating predicted results directly as prompts can sometimes lead to issues like hallucination and irrelevant context generation, as observed in models like ChatGPT [2].\\n\\nLetting the backbone model answer directly can also potentially hallucinate. I don't understand why localmasking will hallucinate less than directly using the domain expert to predict the label. As for irrelevant context generation, one can simply do as in local masking to restrict the domain expert's prediction only to the label words for extracting the answer.\\n\\n> (b) From the results in Table 5 and Table 6, it is evident that relying solely on local or global expert guidance fails to strike the required balance. This underscores the importance of our dual-reward framework, which combines both local (answer accuracy) and global (faithful rationale) rewards. Our approach ensures the LLM generates outputs that are not only accurate but also aligned with domain-specific knowledge.\\n\\n1. The definition of the term \\\"CLS\\\" is removed from the revised paper, where it used to be the first sentence of section 5.1. \\n2. I don't understand your point. From Table 5, apparently CLS (Expert model) performs better than your method on all datasets? 
It is also mentioned in the paper \\\"Notably, when incorporating the CLS, our method does not necessarily perform as well as the classifier alone.\\\" So why not just use CLS to predict the label word? Anyway, your method uses domain experts, so why not choose the most accurate model for generating the answer? The faithfulness of the rationale can be addressed separately.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank all reviewers for their valuable feedback and dedicated time! As some of the reviewers are concerned about the method details, here we give a brief explanation of our motivation and method:\\n\\n**Motivation of leveraging the expert model for faithful rationale generation**\\n\\nAs demonstrated in Table 1, we first identified the importance of domain-specific words in enhancing context adherence. This insight motivated us to increase the generation probability of domain-specific words, leading to the incorporation of an expert model trained on specific domains for faithfulness enhancement. To the best of our knowledge, this is the first study to improve faithfulness by explicitly encouraging the generation of domain-specific tokens. \\n\\n**The advantages of our overall probabilistic inference framework compared with other MCTS-like methods**\\n\\nOur faithfulness-seeking model is technically distinguished by its *computation efficiency* and *lookahead rewards*. \\n- Unlike existing explicit MCTS [1,2] requiring expensive rollouts or simulations to evaluate potential actions, it computes expected rewards in a more integrated and efficient way, streamlining the inference process. 
\\n- The incorporated lookahead is based on cumulative rewards across multiple steps, rather than overly prioritising short-term gains or relying on heuristics, e.g., the normalised average, which does not model future states effectively.\\n\\n**Reward**\\n\\n**Local reward**: Inspired by the literature showing that domain-specific experts tend to demonstrate better accuracy in knowledge-rich tasks, we introduce a set of classification label words and penalise label words that are not included in the expert model\\u2019s predictions. The output probabilities are then renormalised to ensure validity.\\n\\n**Global reward**: As a faithful rationale requires coherence with the surrounding context, we use an expert model trained on a domain-specific corpus to provide a **lookahead** reward when generating rationales. Specifically, the expert model scores the current state $x_t$ by evaluating the generated $(x_t, x_{t+1})$ from the backbone LLM. It is expected that text spans faithful and coherent with the domain-sensitive context are preferred, as they better align with the expert's fine-tuned distribution.\\n\\n**Contribution and novelty**\\n\\n- We investigate the challenge of faithful rationale generation by highlighting the limitations of general LLMs in producing domain-specific responses. To the best of our knowledge, this is the first study to enhance faithfulness by explicitly encouraging the generation of domain-specific tokens.\\n- We propose two novel reward mechanisms, namely local and lookahead rewards, tailored for the rationale generation problem. 
These are integrated into an efficient probabilistic inference framework to achieve a trade-off between task accuracy and rationale faithfulness.\\n- Empirical results show enhancements in both accuracy and faithfulness over seven tasks, with an absolute accuracy improvement of 33\\\\% over the seven datasets, along with a 10\\\\% improvement in faithfulness evaluation, while maintaining a computation cost similar to beam search (1.3$\\\\times$).\", \"references\": \"[1] Don\\u2019t throw away your value model! generating more preferable text with value-guided monte-carlo tree search decoding. \\n\\n[2] Pairwise optimization for o1-like olympiad-level mathematical reasoning.\"}", "{\"title\": \"Response to Reviewer 1TJ5\", \"comment\": \"Thank you very much for spending your valuable time in reviewing our work; we address your concerns as follows.\\n\\n**Q1: Missing references**\\n\\n**R1**: The paper \\u201cEvaluating Human Alignment and Model Faithfulness of LLM Rationale\\u201d was released after our paper submission, and we appreciate you bringing it to our attention. This paper focuses on explanation faithfulness from human perspectives and provides important insights for future faithfulness evaluation. We will add it to our related work discussion. \\nWe have read the paper \\u201cOn Measuring Faithfulness or Self-consistency of Natural Language Explanations\\u201d when preparing our paper submission. It introduced a method of calculating faithfulness as a continuous value rather than a binary measure. We didn\\u2019t cite it as we followed more traditional and widely-used methods for faithfulness evaluation, as outlined in [1,2,3]. 
We will add a discussion of this faithfulness metric in the related work section.\\n\\n**Q2: Explanation of Figure 2**\\n\\n**R2**: For faithfulness, we firstly observed that domain-specific words are important in enhancing context-adherence in Table 1, which inspired us to increase the generation probability of domain-specific words to enhance the faithfulness of the rationales generated. This motivated the design of our approach using global rewards (expert model). To the best of our knowledge, this is the first time faithfulness has been studied by encouraging domain-specific tokens. \\nIn the experiment, we show the distribution of domain-specific words for both the backbone model and our method, highlighting that our model successfully generates more domain-specific words (indicated by the blue line above the yellow line). \\n\\n**Q3: Experiment on other backbones**\\n\\n**R3**: We have updated the evaluation results on Mistral 7B, shown in Appendix B1. The evaluation results show that our method also achieves better accuracy and better faithfulness in its rationales. \\n\\n**Q4: Experiment on other datasets**\\n\\n**R4**: Thanks for your suggestions. We have tested our method across three distinct tasks, i.e., student essay assessment, natural language inference and question answering, using 7 different datasets. The consistent improvements across these tasks validate the effectiveness of our methods, with an absolute accuracy improvement of 33% across the seven datasets, along with a 10% improvement in faithfulness evaluation. Please kindly let us know if you have any particular datasets you recommend for further evaluation. \\n\\nPlease let us know if you have further concrete questions or concerns that we can address. Thank you for your engagement with our work.\"}", "{\"comment\": \"We've taken your initial feedback into careful consideration and incorporated it into our manuscript as indicated in our response. 
Could you kindly confirm whether our responses have appropriately addressed your concerns? If you find that we have properly addressed your concerns, we kindly request that you consider adjusting your initial score accordingly. Please let us know if you have further comments.\\n\\nThank you for your time and effort in reviewing our work.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to reviewer MSQh (1)\", \"comment\": [\"Thank you for your valuable time and your detailed feedback! We address each point as follows.\", \"**Explanation of the proposed method**\", \"The equations referenced are primarily derived from the probabilistic framework of the Feynman-Kac model. We have significantly revised Sections 3.3 and 3.4 in the manuscript, along with an updated Algorithm 1 that introduces our method in a step-by-step manner. Please refer to the revised PDF for detailed explanations. Below is a pointwise response to your proposed questions:\", \"**Q1: Motivation of applying Feynman-Kac Formulae in the faithfulness-seeking search framework?**\", \"**R1**: We refer to the Feynman-Kac framework, which is a general framework of incorporating a potential function to adjust the original conditional probability. For faithfulness, we observed that the expert model tends to generate more domain-specific tokens, which contribute to context-coherent and faithful rationales. This insight motivated us to employ the expert model\\u2019s conditional generation probability as the potential function to adjust the backbone model\\u2019s conditional generation probability accordingly.\", \"**Q2: Explanation of Eq. 1**.\", \"**R2**: $\\\\mathbb{P}_t(s_t)$ represents the probability of reaching $s_t$ under the distribution $\\\\mathbb{P}_t$. Specifically, $[S_t=s_t]$ is an indicator function that is equal to 1 if the state at $t$ is $s_t$, and 0 otherwise. 
The numerator inside the expectation represents the product of rewards and the probability of reaching state $s_t$, ensuring that paths leading to high rewards over time are given more weight. For better understanding, we connect with concepts in MCTS. **Rollout in MCTS** is used to estimate the value of future actions, helping navigate and expand the search tree effectively. Eq. (1) represents a probabilistic reward distribution of state $s_t$. $G_t$ is analogous to the reward function in MCTS, with the advantage of estimating the reward via lookahead. In our framework, the global expert model $U^g$ performs reward estimation on the policy generated by the backbone model (see the `GlobalReward` function).\", \"**Q3: Explanation of line 179 about the motivation of the local constraint.**\", \"**R3**: *Firstly*, we would like to emphasise that attributes in language, such as toxicity and faithfulness, can manifest across longer spans of text rather than being confined to explicit attribute-bearing words. This is why we opt not to utilize token-level constraints like logit fusion [1] for addressing the faithful rationale generation problem. Such methods may fall short in capturing the nuanced and distributed nature of these attributes over extended contexts. *Instead*, we align with existing literature highlighting the effectiveness of domain-specific experts in achieving higher accuracy on tasks rich in domain knowledge. For example, task-specific classifiers are known to excel in tasks requiring specialised expertise. This observation inspires our approach of incorporating domain experts for label prediction to enhance both accuracy and faithfulness in rationale generation.\", \"**Q4: why not just use the expert to predict the scores.**\", \"**R4**: Thanks for raising this interesting point! (a) The primary goal of our framework is to maintain a trade-off between task performance (accuracy) and rationale faithfulness. 
Incorporating predicted results directly as prompts can sometimes lead to issues like hallucination and irrelevant context generation, as observed in models like ChatGPT [2]. (b) From the results in Table 5 and Table 6, it is evident that relying solely on local or global expert guidance fails to strike the required balance. This underscores the importance of our dual-reward framework, which combines both local (answer accuracy) and global (faithful rationale) rewards. Our approach ensures the LLM generates outputs that are not only accurate but also aligned with domain-specific knowledge. (c) Our framework integrates expert predictions in a probabilistic manner, i.e., treating the prediction as a conditional input within the generation process. This will fundamentally guide the rationale generation process and is likely to produce a rationale that is more consistent and faithful to the prediction.\", \"**Q5: local constraint**\", \"**R5**: For better clarity, we removed Eq. 2 and Eq. 3. The local reward is calculated as follows: we introduce a set of classification label words $\\\\mathcal{C}$ and remove the label words which are not included in the expert model\\u2019s prediction $c_0$. We then renormalise the output probability based on the new vocabulary (see the function `LocalMask` for details).\"]}
Specifically, they propose a probabilistic inference paradigm that provides fine-grained and lookahead rewards to instruct LLMs to generate good rationales. The key problem addressed is that LLMs often produce unfaithful explanations, especially when they fail to incorporate essential contextual information.\\n\\n+ **Local Reward**: this component ensures coherence with the immediate context, often by using a domain-specific expert model.\\n+ **Global Reward**: this component assesses the plausibility of the current token in relation to desirable future attributes.\\n\\nThe search algorithm, especially the lookahead reweighting, seems interesting.\\n\\nPlease forgive me if I misunderstand something. I spent much time reading the paper, but to be honest, I am not an expert in this area. I will be available during the rebuttal period for the authors' response and will read their response. I am also open to other reviewers' opinions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a novel probabilistic inference method with a dual-reward mechanism, combining local and global rewards. This is a very novel solution.\\n2. The paper is well-written. I am not an expert in this domain, but I can get their core contributions.\\n3. The experiment design is clear: they design the ablation study in Section 5.1 to justify the local and global rewards for the final performance. Although I suggest the authors could do better by choosing more LLMs of different model sizes to better support their experimental design.\", \"weaknesses\": \"1. There are several related works that are missing or less discussed:\\n + Evaluating Human Alignment and Model Faithfulness of LLM Rationale\\n + On Measuring Faithfulness or Self-consistency of Natural Language Explanations\\n2. Figure 2 about the distribution of domain-specific words is unclear to me. 
\\\"showing that our method can respond more actively to those domain-specific words\\\" Why does this part matters to the experimental results.\", \"questions\": \"1. The overall experiments are conducted on LLaMA3. I think more backbone LLMs and other sizes of LLMs are needed to justify the proposed inference paradigm.\\n2. More experiments on more related datasets is needed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0YXckVo7Kw
MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models
[ "Hang Hua", "Yunlong Tang", "Ziyun Zeng", "Liangliang Cao", "Zhengyuan Yang", "Hangfeng He", "Chenliang Xu", "Jiebo Luo" ]
The advent of large Vision-Language Models (VLMs) has significantly advanced multimodal understanding, enabling more sophisticated and accurate integration of visual and textual information across various tasks, including image and video captioning, visual question answering, and cross-modal retrieval. Despite VLMs' superior capabilities, researchers lack a comprehensive understanding of their compositionality -- the ability to understand and produce novel combinations of known visual and textual components. Prior benchmarks provide only a relatively rough compositionality evaluation from the perspectives of objects, relations, and attributes while neglecting deeper reasoning about object interactions, counting, and complex compositions. However, compositionality is a critical ability that facilitates coherent reasoning and understanding across modalities for VLMs. To address this limitation, we propose MMCOMPOSITION, a novel human-annotated benchmark for comprehensively and accurately evaluating VLMs' compositionality. Our proposed benchmark serves as a complement to these earlier works. With MMCOMPOSITION, we can quantify and explore the compositionality of the mainstream VLMs. Surprisingly, we find GPT-4o's compositionality inferior to the best open-source model, and we analyze the underlying reasons. Our experimental analysis reveals the limitations of VLMs in fine-grained compositional perception and reasoning, and points to areas for improvement in VLM design and training.
[ "Vision-Language Models", "Compositionality", "Benchmark" ]
https://openreview.net/pdf?id=0YXckVo7Kw
https://openreview.net/forum?id=0YXckVo7Kw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmevxlvYEM", "uMeBVSoKcT", "sycPJatknj", "s4Hp8jwywV", "qLcaxxKvDK", "puTbbLrlkD", "obW0Z1pc9T", "o0JvNHwuXk", "jjK1RKO1Cj", "jeaSFeJ5Jy", "j3znw7h1mg", "j1xrFWWBfX", "akkmzcotyt", "a2Kv6xA8f9", "VNwyOsYS4Q", "PxjJKiZA6r", "GaGpoGauR1", "BV5dCjpgZj", "A8i0h6PeY7", "9hPhNuDxnn", "7lM1TkkTWd", "5jpJYBOvxq", "3SrPNLlRGP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732686401122, 1733207163834, 1732445535854, 1732479888250, 1730316127005, 1732290176215, 1737660864425, 1732134689134, 1732342781137, 1732508579587, 1730063786974, 1732134769813, 1732134600251, 1730503915080, 1732554563377, 1732381657160, 1732179526074, 1732479568590, 1730635521733, 1732569271054, 1732134535492, 1732242166748, 1732134737750 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_33Vo" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_SeFC" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_SeFC" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_33Vo" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_Deoo" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_wuw2" ], [ 
"ICLR.cc/2025/Conference/Submission821/Reviewer_wuw2" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_Deoo" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_SeFC" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_33Vo" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ], [ "ICLR.cc/2025/Conference/Submission821/Reviewer_33Vo" ], [ "ICLR.cc/2025/Conference/Submission821/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer wuw2,\\n\\nWe believe we have addressed your concerns. If our careful and respectful responses continue to be ignored, we will report this to the ACs and/or PC.\"}", "{\"title\": \"General Response to Reviewers and ACs\", \"comment\": [\"We sincerely thank the reviewers for their thoughtful evaluations and constructive feedback. We are encouraged by the recognition of the strengths of our work, including:\", \"**Key Contributions**:\", \"**R3**: *\\\"MMCOMPOSITION evaluates tasks like multi-image reasoning, object interactions, and counting, all of which are crucial for real-world, nuanced understanding.\\\"*\", \"**R3**: *\\\"Improvement upon Existing Compositional Datasets.\\\"*\", \"**R4**: *\\\"The analysis on model component provides valuable insight on model design.\\\"*\", \"**Benchmark Design**:\", \"**R1**: *\\\"A comprehensive benchmark focused on compositionality, encompassing a wide range of skills.\\\"*\", \"**R2**: *\\\"New benchmarks are always good, human curation is appreciated.\\\"*\", \"**R4**: *\\\"Human-annotated and covers a wide range of tasks in terms of compositional understanding.\\\"*\", \"**R4**: *\\\"The benchmark is challenging and demonstrates a large performance gap between human and VLMs.\\\"*\", \"**Experiments and Analysis**:\", \"**R1**: *\\\"This paper provides an extensive evaluation of recent models.\\\"*\", \"**R2**: *\\\"Large number of models evaluated.\\\"*\", 
\"**R3**: *\\\"In-depth model comparison and component analysis.\\\"*\", \"**Paper Writing and Organization**:\", \"**R1**: *\\\"The paper is well-written with clearly organized sections.\\\"*\", \"Meanwhile, we would like to raise some concerns regarding the comments from **R2** and **R3**:\", \"**R2 (wuw2)**:\", \"It appears that R2 may not have thoroughly read and understood our paper, and our careful and respectful responses are deliberately ignored.\", \"They missed most of the key points, and the concerns raised by R2 have been addressed in our paper revision.\", \"Moreover, R2 made conclusions based on their \\\"reasoning\\\" rather than evidence. We believe that conclusions should be based on evidence, not merely on reasoning.\", \"**R3 (SeFC)**:\", \"The final comments from R3 are ambiguous and fail to specify concrete issues with our work.\", \"The concerns raised seem unrelated to the core contributions of our paper, instead focusing disproportionately on minor or peripheral details.\", \"Despite multiple requests for clarification, we have not received any further response. This lack of engagement limits our ability to address their concerns effectively.\", \"We believe we have thoroughly addressed all the main concerns that were clearly articulated. It would be unfair to disregard our contributions based on these misunderstandings and ambiguous feedback. Therefore, we respectfully request that the Area Chair investigate these issues to ensure a fair evaluation.\", \"Thank you again for the valuable dedication and for recognizing the significance of our contribution.\"]}", "{\"comment\": \"My initial concerns have been addressed. I would be happy to adjust my score recommendation to 6 if the evaluation code release for MMComposition includes procedures for testing the models and settings reported in the paper, including image-blind settings with both VLMs and LLMs.\"}", "{\"comment\": \"I sincerely appreciate your hard work and effort. 
However, it appears that the multi-hop questions, which are a key component, were not executed well, particularly when the reasoning is relatively easy. Additionally, creating an in-context test set requires careful thought and attention to detail. That said, I will keep my current score as a reflection of my respect for your dedication.\"}", "{\"summary\": \"The paper \\\"MMCOMPOSITION: Revisiting the Compositionality of Pre-Trained Vision-Language Models\\\" presents MMCOMPOSITION, a new benchmark focused on testing VLMs' ability to handle complex compositional tasks like object interactions, counting, and scene reasoning. With 4,342 annotated questions across 13 categories, the benchmark highlights a clear performance gap between models and humans (67.95% vs. 90.31% accuracy). Results suggest that improving high-resolution encoders, scaling language decoders, and expanding training data are key to better compositional reasoning in VLMs. MMCOMPOSITION offers a practical tool for refining future VLMs to better understand complex compositions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Strengths of MMCOMPOSITION:\\n\\n1. Targeted Evaluation of Compositionality for VLMs: MMCOMPOSITION provides a focused benchmark to assess compositional reasoning in Vision-Language Models, an area where existing models often fall short. By going beyond basic attribute recognition, MMCOMPOSITION evaluates tasks like multi-image reasoning, object interactions, and counting, all of which are crucial for real-world, nuanced understanding.\\n\\n2. Improvement upon Existing Compositional Datasets: This benchmark builds on and enhances data from existing compositional datasets, such as ARO, to create a more diverse and challenging evaluation framework. By curating tasks that move beyond traditional benchmarks, MMCOMPOSITION offers a comprehensive dataset for testing complex visual-language interactions.\\n\\n3. 
In-Depth Model Comparison and Component Analysis: MMCOMPOSITION evaluates over 50 VLMs across different architectural components, allowing a detailed comparison. This thorough assessment reveals how factors like encoder resolution, decoder size, and training data diversity impact compositional reasoning. It offers practical insights that can guide future improvements in model design.\", \"weaknesses\": \"typos:\\ntable 4 - Relolution \\n\\n1. In-context multimodal compositionality: Adding tests for in-context multimodal compositionality could strengthen the benchmark, as this capability is crucial for real-world applications. Evaluating models' ability to maintain compositional understanding across multi-modal inputs, rather than isolated tasks, could enhance the dataset's relevance.\\n2. Multi-hop compositional problems: The paper would benefit from including multi-hop reasoning tasks, where models must integrate multiple compositional steps to arrive at an answer. This kind of problem is essential for advanced compositionality and would make the benchmark more challenging and comprehensive.\\n3. Questionable novelty: The novelty of the paper could be improved if it incorporated points 1 and 2. Adding in-context multimodal compositionality and multi-hop compositional problems would make MMCOMPOSITION a more distinctive and valuable benchmark.\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful feedback! We appreciate the opportunity to analyze the underlying reasons behind the observed phenomena.\", \"q1\": \"In the initial ICL setting, we randomly sampled examples from the dataset to serve as in-context examples, testing with one, two, and three examples. We observed that increasing the number of examples led to a performance decline. 
We hypothesize that this decrease is due to the significant increase in prompt length caused by the additional examples, which negatively impacts the model's performance. Additionally, the in-context examples might confuse the models, leading to incorrect predictions.\\n\\nIn this response, we employed a dynamic retrieval method that retrieves the most similar QA pairs for each query as in-context examples using Dense Passage Retriever (DPR) [2]. Additionally, we experimented with modified prompt formats inspired by SeedBench, integrating these structured prompts with the retrieved QA pairs. The results of these experiments are presented in the accompanying table, where we find that the dynamic retrieval method improves the model's performance on perception tasks compared to randomly sampled examples.\\n\\nRegarding the probing task performance in this response, since all the questions are indefinite-choice, we hypothesize that including context examples may confuse the models about the number of correct answers. This could potentially lead to a decrease in performance, as the models might struggle to determine how many options to select based on the examples provided.\\n\\nIn conclusion, optimizing ICL settings to maximize model performance requires significant effort and careful exploration. 
Given the complexity and variability of factors influencing ICL effectiveness, we believe it is valuable to leave further investigations and refinements to future work, providing ample space for continued exploration in this area.\\n\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nQwen2-VL-72B | 56.53 | 76.39 | 70.26 | 65.24 |\\nQwen2-VL-72B-1example-short-prompt | 62.07 (+5.54) | 71.60 (-4.79) | 42.41 (-27.85) | 63.48 (-1.76) |\\nQwen2-VL-72B-1-random-example (SeedBench Prompt Format) | 62.05 (+5.52) | 70.59 (-5.80) | 49.67 (-20.59) | 63.87(-1.37) |\\nQwen2-VL-72B-1-DPR-retrieval-example (SeedBench Prompt Format) | 62.69 (+6.16) | 68.70 (-7.69) | 46.48 (-23.78) | 63.17 (-2.11) |\", \"q2\": \"The observed results can be attributed to differences in difficulty distribution between the settings. As shown in the table, multi-hop perception contains a higher percentage of hard and super hard questions, while multi-hop reasoning includes a larger proportion of easy questions. Specifically, 38.64% of multi-hop reasoning questions are classified as easy, compared to only 3.72% in non-multi-hop reasoning. Therefore, the overall difficulty of multi-hop reasoning is lower than that of non-multi-hop reasoning. We believe this explains the phenomenon. **In addition, we have updated Figure 16 to include the 13 categories of multi-hop questions for enhanced clarity and comprehensiveness.**\\n\\n| Question | Task | Easy | Medium | Hard | Superhard|All |\\n|-|-|-|-|-|-|-|\\n| Multi-hop | Reasoning | 437 (38.64%) | 245 (21.66%) | 399 (35.28%) | 50 (4.42%) | 1,131 |\\n| | Perception | 3 (0.36%) | 276 (33.29%) | 373 (44.99%) | 177 (21.35%) | 829 |\\n| Non-multi-hop | Reasoning | 17 (3.72%) | 142 (31.07%) | 212 (46.39%) | 86 (18.82%) | 457 |\\n| | Perception | 85 (6.15%) | 322 (23.28%) | 756 (54.66%) | 220 (15.91%) | 1,383 |\\n\\n[2] Karpukhin, Vladimir, et al. 
\\\"Dense passage retrieval for open-domain question answering.\\\" arXiv preprint arXiv:2004.04906 (2020).\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"We sincerely appreciate your invaluable feedback and the opportunity to address your queries regarding our benchmark.\", \"q1\": \"As described in lines 132\\u2013133 of our paper, our proposed benchmark is a new human-annotated dataset for evaluating VLMs\\u2019 compositionally. **All the question-answer pairs are human-annotated (see lines 270\\u2013272).** We use various datasets with the potential to construct VL compositional QA pairs as our seed data (lines 212\\u2013213). Importantly, **we only use the images from these seed datasets and their initial annotations as prompts to construct new QA pairs**. Therefore, we have introduced new data in this work.\", \"q2\": \"The goal of our work is to provide a comprehensive diagnostic analysis of current VLMs regarding their capability for VL compositional perception and reasoning, serving as a complement to earlier comprehensive benchmarks such as MMBench, MMStar, and MME. Therefore, the significance of our work should not be overlooked. We believe that our contribution is non-trivial and will benefit the community involved in VLM design and training.\", \"q3\": \"We have updated the human subjects in the revised paper, please see Table 2 and Reviewer 33Vo Q3.\", \"q4\": \"We provide the quantitative results of our error analysis in the appendix; please see Figure 13 and Section A.4.\", \"q5\": \"As described in lines 132\\u2013133 of our paper, our dataset is **fully human-annotated**. **All the QA pairs are human-annotated (lines 270-272)**. 
Therefore, the problem you mentioned does not apply to our paper.\", \"q6\": \"Please refer to Q1 and Q2; we believe our contribution is non-trivial and will benefit the community of VLM design and training.\\n\\n\\nIn conclusion, we introduce a new dataset, and all the QA pairs in our data are created by human annotators, as **clarified in multiple sections of our paper**. We have carefully verified the properties that distinguish our dataset from previous work. We believe that our contribution is significant and will benefit the VLM community.\"}", "{\"comment\": \"Thank you for your endorsement and valuable suggestions! In response, we have included results for pure language models, specifically GPT-3.5, Qwen2.5-72B-Instruct, and LLaMA-3.1-70B. Please refer to the tables for detailed comparisons and insights. From the tables, we observe that the performance of the pure language models is close to random guessing (30.15%), which underscores the indispensable role of visual information in our dataset.\\n\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nGPT-3.5-Turbo | 26.53 | 42.07 | 32.93 | 32.89 |\\nLLaMA-3.1-70B | 36.15 | 35.08 | 26.58 | 34.74 |\\nQwen2.5-72B | 37.16 | 40.49 | 30.76 | 37.70 |\"}", "{\"comment\": \"Thank you for the comment! I have updated the score accordingly.\"}", "{\"summary\": \"The paper proposes MMComposition - a human-annotated benchmark dataset for evaluating the compositionality of large vision-language models.\\nThe benchmark contains 4.3K questions in three main dimensions: perception, reasoning, and probing, which are divided into 13 categories. There are both questions that contain a single image and questions that contain multiple images. Most questions have a single correct answer. There are 459 questions with indefinite-choice.\\nThe benchmark demonstrates human performance (90.31%) and state-of-the-art VLMs (best performance of 67.95% among 54 evaluated VLMs). 
\nThere is also an analysis of the impact of VLM architecture factors on benchmark performance, e.g., visual encoder design, language decoder size, and training data volume.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The dataset is human-annotated and covers a wide range of tasks in terms of compositional understanding\\n2) The paper evaluates 54 representative large VLMs, including open-source and proprietary ones. The benchmark is challenging and demonstrates a large performance gap between human and VLMs. \\n3) The analysis on model component provides valuable insight on model design.\", \"weaknesses\": \"1) The paper categorizes the questions into 4 difficulty levels based on the performance of 6 open-source models. In Figure 8, it shows that 62.09% of questions in the category \\u201csuperhard\\u201d lead to the average performance of all VLMs falling below the average level. It would be interesting to analyze what characteristics lead to the different difficulty levels of these questions. This can shed light on how to design difficult questions for the competent VLMs.\\n2) In the evaluation benchmark, for questions that contain multiple images, the images are concatenated into a big collage and fed into the model. Some of the VLMs have multiple-image samples in the training data and can perform VQA with multiple input images. Does feeding the collage into them impede the performance of these models?\", \"questions\": \"1) In line 254, \\u201cwe select several captions from the dense captions in Visual Genome as the correct options and write the misaligned captions manually for the image\\u201d\\nWhat are the criteria for writing the misaligned captions? 
In which characteristics do the misaligned captions differ from the original captions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part 2\", \"comment\": \"Q6: Good point. We provide experimental results comparing models before and after tuning with extra instruction tuning data. Please refer to the table below. The results indicate that models benefit from fine-tuning with extra related datasets for their compositional perception capability, and we have updated this finding in the revised version. In addition, Table 5 in our paper also compares the performance of the models with and without more data fine-tuning.\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nLLaVA1.5-7B | 36.51 | 47.04 | 30.32 | 39.71 |\\nLLaVA1.5-7B+ShareGPT4V | **38.00** | 45.34 | 26.43 | **39.46** |\\nLLaVA1.5-13B | 37.23 | 49.75 | 39.32 | 42.03 |\\nLLaVA1.5-13B+ShareGPT4V | **40.04** | 47.73 | 39.29 | **42.77** |\\n*Comparison of LLaVA1.5 and LLaVA1.5 fine-tuned with the ShareGPT4V dataset on MMComposition.\\n\\nIn conclusion, our work aims to provide a comprehensive diagnostic analysis of current VLMs regarding their capability for VL compositional perception and reasoning, serving as a complement to earlier comprehensive benchmarks such as MMBench, MMStar, and MME.\"}", "{\"comment\": \"Thank you for your time, thorough comments, and valuable suggestions. We are pleased that you acknowledged our dataset as an improvement upon existing compositional datasets and recognized our experiments for their in-depth model comparison and component analysis.\", \"q1\": \"Thank you for your suggestion. We have added the experimental results under the ICL setting. 
From the table, we can observe that introducing context examples into the prompt decreased the models\\u2019 performance to varying degrees.\\n\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nQwen2-VL-72B | 56.53 | 76.39 | 70.26 | 65.24 |\\nQwen2-VL-72B-1example | 62.19 (+5.66) | 73.30 (-3.09) | 43.94 (-26.32) | 64.32 (-0.92) |\\nQwen2-VL-72B-2example | 63.06 (+6.53) | 70.84 (-5.55) | 46.37 (-23.89) | 64.14 (-1.10) |\\nQwen2-VL-72B-3example | 61.61 (+5.08) | 69.46 (-6.93) | 48.87 (-21.39) | 63.13 (-2.11) |\\nInternVL2-40B | 64.57 | 74.12 | 67.14 | 67.95 |\\nInternVL2-40B-1example | 54.01 (-10.56) | 66.62 (-7.50) | 36.97 (-30.17) | 56.82 (-11.13) |\\nInternVL2-40B-2example | 52.37 (-12.20) | 65.24 (-8.88) | 36.24 (-30.90) | 55.37 (-12.58) |\\nInternVL2-40B-3example | 51.05 (-13.52) | 63.73 (-10.39) | 39.94 (-27.20) | 54.51 (-13.44) |\", \"q2\": \"We have computed the proportion of multi-hop QA pairs in our benchmark, which is **2,459 out of 4,342, amounting to 56.63%**. We compared the models' performance on multi-hop versus non-multi-hop questions, and the results are shown in the table. From these results, we observe that models struggle with multi-hop reasoning tasks. 
We also provide examples of the multi-hop questions in the revised paper; please see Figure 16.\\n\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nInternVL2-40B-non-multi-hop | 74.11 | 66.52 | - | 72.28 |\\nInternVL2-40B-multi-hop | 51.24 | 77.01 | 59.59 | 64.63 |\\nQwen2-VL-72B-non-multi-hop | 55.05 | 69.37 | - | 58.55 |\\nQwen2-VL-72B-multi-hop | 58.91 | 79.22 | 69.57 | 70.22 |\\nVILA-40B-non-multi-hop | 66.29 | 61.49 | - | 65.14 |\\nVILA-40B-multi-hop | 44.58 | 71.62 | 62.16 | 60.25 |\\nGPT-4o-non-multi-hop | 63.19 | 57.77 | - | 61.90 |\\nGPT-4o-multi-hop | 48.51 | 66.76 | 54.65 | 58.03 |\\nLLaVA-1.6-34B-non-multi-hop | 66.14 | 61.27 | - | 64.98 |\\nLLaVA-1.6-34B-multi-hop | 44.20 | 57.91 | 58.17 | 53.09 |\\nGemini-1.5-Pro-non-multi-hop | 55.68 | 46.61 | - | 53.50 |\\nGemini-1.5-Pro-multi-hop | 42.39 | 62.78 | 49.60 | 53.09 |\", \"q3\": \"We have addressed the questions in points 1 and 2; the results are shown in the corresponding tables.\"}", "{\"summary\": \"The paper proposes a new compositional reasoning benchmark that is constructed from existing benchmarks (data collection, lines 211-238) augmented with negative-option retrieval by similarity, consensus filtering by several recent LMMs, and further human filtering of the resulting visual QA. An extensive evaluation of recent models on the proposed benchmark is performed. Some additional ablations are attempted by grouping models trained on more data, larger LLM decoders, vis. encoder combinations, etc. However, those only confirm known facts: larger data, larger decoders, or more encoders are beneficial. Some analysis of failures is provided, albeit only qualitative. The main interesting aspect seems to be a large gap reported between human performance and the models. 
However, no statistics of the human subjects are provided (e.g. how many humans were employed, how they were motivated, what was the disagreement between humans, age groups, etc.).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"new benchmarks are always good, human curation is appreciated\", \"large number of models evaluated\"], \"weaknesses\": [\"no new data is introduced, built from existing benchmarks\", \"no surprising conclusions\", \"no statistics for the human subjects\", \"error analysis is only qualitative\", \"dataset construction methodology involving humans could be more interesting - e.g. humans could generate questions, red-team models to generate hard negatives, etc.\", \"I disagree with Table 1, many benchmarks, including those listed, have fine-grained questions, there are benchmarks (e.g. NLVR2) involving multiple images, other benchmarks have human filtering, at least a partial subset, the only thing I indeed did not encounter before is \\\"multiple right answers\\\" (indefinite choice) - which could indeed be a contribution of the paper\", \"while benchmark contributions are appreciated, it seems this paper is somewhat below what I would expect from the level of contribution of an ICLR paper\"], \"questions\": \"please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"humans involved in data curation, without reporting details on that, however I am not sure if this is a real ethical concern here\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
there are a bunch of compositional reasoning benchmarks proposed in the past - aro, crepe/sugarcrepe, vl-checklist, eyes wide shut, and newer ones more recently pushed to arxiv - all of them offer some data collection methodology, most if not all have human filtering partitions. Your work is certainly valuable, yet it would be nice to see some more clear distinction - e.g. something specific that is only detectable by your benchmark\\n3. Having a resolution gap analysis (higher than 768 vs lower than 768) and verifying the average score of 54 models is below average is a good start (btw, 54 models' average lower than chance sounds suspicious as it might be biased towards the weak models, how about the same for the strongest-k models? ideally maybe k=1 or k=2?), yet I would expect more in-depth insights from a benchmark paper - detailed analysis on what is difficult for the strongest models in terms of different expected human capabilities, comparing those individually to human performance, etc. So please don't be discouraged by my comment, all I am saying is that with more work I believe your efforts would indeed make this a significant tool for the community, I just feel it is not quite there yet.\\n4. \\\"will benefit the community of VLM design and training\\\" - for this to be so, you need to pinpoint more clearly what it is that your benchmark currently predicts the community needs to focus on? for example, higher resolution handling cannot be it, as this was already quite extensively popularized by the llava team with their extended anyres experiments (pls see one of their blogs), etc.\\n\\nIn light of the above reasoning, I still prefer to keep my current score of 5, but encourage the authors to continue working on their benchmark and submit it to a later venue, I just don't think it is yet ready in its current state\"}
My concerns are addressed.\"}", "{\"comment\": \"Regarding Q1: Could you clarify your In-Context Learning (ICL) setup? It\\u2019s a bit unclear whether the examples you use are specifically chosen to improve the final results (as they should be) or are selected randomly. Could you provide an example of an ICL configuration you\\u2019re using? For inspiration, you could refer to the Seed benchmark, which includes an ICL test. While it doesn\\u2019t explicitly target compositionality, it does assess it indirectly.\", \"for_q2\": \"Thank you! In Table 11, I noticed that degradation occurs only in perception for multi-hop questions, while reasoning remains unaffected. This seems counterintuitive, as multi-hop questions are inherently more challenging and should impact reasoning. Could you revisit this? Why do you think this happens? Could it be an inherent issue in the data creation process? For your multi-hop examples, it might be helpful to provide an example for each topic to better illustrate the setup.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for considering adjusting the score recommendation. We are pleased that our response has successfully addressed your initial concerns, and your suggestions have significantly helped us improve the quality of our work!\\n\\nWe want to confirm that the example evaluation code for MMComposition is now complete and has been made available through the supplementary materials. This code provides comprehensive procedures for testing the models and settings discussed in our work, including the image-blind settings for both VLMs and LLMs. We believe this fully addresses your concerns regarding reproducibility and transparency. Furthermore, we are actively working on refining and formatting all evaluation code to support a more comprehensive and robust evaluation framework for MMComposition. 
All the resources will be released soon.\\n\\nThank you again for your valuable time and thoughtful review!\"}", "{\"summary\": \"This paper introduces MMComposition, a QA benchmark that evaluates the compositional capabilities of modern vision-language models. MMComposition encompasses a range of tasks, including perception, reasoning, and probing, with multiple subtasks presented in various QA formats: yes/no, multiple-choice, and indefinite-choice. The dataset is curated from numerous existing sources, with QA pairs annotated by humans. Covering 13 distinct vision-language compositionality tasks, this benchmark offers a comprehensive evaluation of both proprietary and open-source vision-language models. The paper also analyzes factors that may influence the compositional abilities of VLMs, such as the resolution of visual encoders, the scale of language decoders, and the volume of training data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper presents a comprehensive benchmark focused on compositionality, encompassing a wide range of skills from perception and reasoning to probing.\", \"This paper provides an extensive evaluation of recent models, including both open-source and API-based models, highlighting areas where they continue to fall short of human capabilities.\", \"The paper is well-written with clearly organized sections.\"], \"weaknesses\": [\"Although the benchmark includes diverse skill sets and QA formats, the specific aspects that pose challenges are not clearly defined. It is also unclear what distinguishes this benchmark from other general QA datasets designed to test modern VLMs for AGI, such as MMMU, MMStar, and tons of similar benchmarks. The paper does not provide comparisons in terms of general capabilities across QA datasets; instead, it focuses on embedding-based benchmarks for comparison, as shown in Table 1. 
Comparing the scale of evaluation samples, such as the number of images or questions across different benchmarks, would also be valuable.\", \"Related to the first weakness, one might question whether this benchmark is truly challenging. Some compositionality benchmarks or visual QA tasks could potentially be solved using only language models in an image-blind setting, due to language priors, such as coherence, grammar, and clues embedded across answer choices. As a specific example, in the second example in Figure 3, a can is often made of metal; such knowledge aids in answering correctly without relying on visual cues. It would be beneficial to examine the proportion of questions that can be solved solely using large language models.\", \"Several essential details are missing regarding the benchmark construction. In the human annotation process, additional information is needed: Who annotated the dataset? How was confidence measured, and how were errors handled in finalizing the annotations? Additionally, it\\u2019s unclear how misaligned captions were manually added in the probing task (line 255). Furthermore, for reporting human performance, what was the process? It would be important to present individual human performance scores for each skill, rather than a single overall score.\", \"The empirical trends concerning the scale of the visual encoder, language decoder, and training data are perhaps not surprising. The paper does not analyze whether these trends are specific to the proposed benchmark or if they also appear in other general visual QA benchmarks. 
Meanwhile, an additional suggested analysis could explore how the design of the visual connector (e.g., fully connected layer or Q-Former style) and the method of visual token insertion (e.g., tokens input directly into the language model or through cross-attention connections) impact performance on the proposed benchmark.\", \"There are some notable clarity issues, including typographical errors such as 'MuriBench' in line 237 and 'ARC' in line 241. Additionally, there are inconsistencies in publication years for certain cited papers, particularly recent NeurIPS papers like SugarCrepe, which collectively raise concerns about professionalism.\", \"Could fine-tuning VLMs on specific datasets improve performance on MMComposition?\", \"---\"], \"assessment\": \"While the extensive evaluations across VLMs are commendable, the benchmark falls short of expected standards in terms of detailed documentation, verification, and comparisons with other QA benchmarks. Additionally, analyses of the proposed benchmark could be enhanced by comparing observed trends with those from other benchmarks.\", \"questions\": [\"The reasoning behind the name 'MMComposition' is unclear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback. We are pleased to address your remaining concerns.\\n\\nQ1. There are no restrictions against using existing data as seed data for dataset construction. Many new datasets, such as RefCoCo, Visual Genome, LLaVA Bench, and MMVP, leverage existing data sources in this way and are well-accepted as significant tools for VLM evaluation and diagnosis. Additionally, you recognize our dataset as \\\"a new benchmark\\\" in the Strengths section. We believe this point is unrelated to the core contributions of our paper and seems to focus excessively on minor or irrelevant details.\\n\\n\\nQ2. 
We have included the main differences between our work and existing benchmarks. **Table 1 and Section 1 clearly outline the novelty and distinct aspects of MMComposition compared to existing works.**\\u00a0 Reviewer 1 has recognized the novelty and distinct aspects of our work, please also refer to R1 Q1, where we have successfully addressed this concern for them.\\n\\nQ3. The resolution gap is a significant factor that may influence a model's capabilities. It is reasonable that models limited to processing low-resolution images exhibit poorer compositionality. Therefore, your statement that \\\"it might be biased towards the weak models\\\" is consistent with intuition. We have analyzed the impact of encoding resolution on models' performance in Section A.2 for the top three models (k=3). **Thus, your concern regarding \\u2018the same for strongest-k models? ideally maybe k=1 or k=2\\u2019 has been addressed in our paper.** In addition, our paper includes an in-depth analysis of the strongest models in terms of different expected human capabilities\\u2014these models include GPT-4o, Qwen2-VL, and InternVL. Furthermore, our work provides an in-depth analysis of the relationship between the design of VLMs and their compositionality. As the first study to comprehensively examine complex compositional perception and reasoning, it distinguishes itself from previous works such as ARO, Crepe, and SugarCrepe. **We want to emphasize that this contribution should not be overlooked.** \\n\\nQ4. Our paper clearly \\\"predicts the community needs to focus on\\\". In Section 1 (lines 106\\u2013126), we have explicitly addressed this by highlighting: **\\\"(1) Visual Encoder Design: While a mixture-of-encoder architecture can enhance compositionality, adding more encoders does not necessarily improve performance.\\\" ... \\\"we find that for relatively simple QA tasks, only a small portion of its language capabilities are utilized ... 
Once the language decoder size reaches a certain threshold (e.g., 34B, 70B), the visual encoder has a more significant impact on the model\\u2019s compositionality. \\\")**. We believe these points provide clear guidance on where the community's efforts could be most effectively directed.\\n\\nIn contrast, the LlaVA team's analysis **primarily identifies the phenomenon but does not delve into the underlying reasons. Furthermore, prior works lack a comprehensive analysis of why models fail at fine-grained visual compositional perception and reasoning. In our paper, we thoroughly examine these underlying reasons in Sections 1, 5, and A.2.** Therefore, we believe this concern has already been addressed in our work.\\n\\n\\nIn conclusion, we believe that conclusions should be based on **evidence**, not merely on **\\u201d reasoning.\\u201d** The concerns you mentioned have been thoroughly addressed in our work, and some of the \\\"issues\\\" you raised have, in fact, been recognized as strengths by other reviewers. Therefore, we respectfully request a reconsideration of the evaluation of our paper.\"}", "{\"comment\": \"Thanks for your constructive suggestions. Your endorsement of our dataset and experiments gives us significant encouragement.\", \"q1\": \"Thank you for your suggestion! We will analyze the patterns of super hard questions for shed light on how to design difficult questions for the competent VLMs in the revised paper.\", \"q2\": \"We conducted experiments to compare different formats of input images, including combining multiple images into a single 'super image' and feeding the models with a list of images. The results, shown in the table, demonstrate that these formats have varying impacts on different models' performance. 
Specifically, providing a list of images improved the performance of Qwen2-VL-72B, but led to a decrease in performance for InternVL2-40B.\\n\\nModel | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|\\nQwen2-VL-72B | 55.36 | 77.17 | 89.86 | 71.75 |\\nQwen2-VL-72B-multi | 63.01 (+7.65) | 80.35 (+3.18) | 89.19 (-0.67) | 75.89 (+4.14) |\\nInternVL2-40B | 42.35 | 73.27 | 88.51 | 65.26 |\\nInternVL2-40B-multi | 39.29 (-3.06) | 72.54 (-0.73) | 86.49 (-2.02) | 63.64 (-1.62) |\", \"q3\": \"We follow the method proposed in FineMatch [1], which adopts the criteria of replacing attribute, relation, and object phrases while maintaining the Part of Speech (POS) tags unchanged. This approach keeps the mismatched captions as similar as possible at the character level to the original correct captions.\\n\\n[1] \\\"FineMatch: Aspect-Based Fine-Grained Image and Text Mismatch Detection and Correction.\\\" ECCV, 2024.\"}", "{\"comment\": \"From the authors' response, the work appears to be improved, particularly in its comparison with previous QA benchmarks, its analysis of the benchmark's challenges (e.g., the image-blind setting), and the additional exploration of connector design and fine-tuning approach.\\n\\nRegarding the image-blind setting, I am curious whether pure language-based LLMs, such as LLaMA-3.1, GPT-3.5, or others, can perform well on the proposed benchmark in the absence of the provided image.\"}", "{\"comment\": \"Thank you for the time, thorough comments, and nice suggestions. We hope our response can adequately address your concerns.\", \"q1\": \"We have included a comparison of our benchmark with other general benchmarks, as shown in the table. Our benchmark stands out from well-known benchmarks due to its multi-hop QA pairs, its specific capability assessment -- compositionality -- and its challenging nature. 
This comparison has also been included in the revised paper (see Table 8).\\n\\nDataset | Size | Human Annotation | Multi-Hop | Capabilities | Best Performance (Model/Human) |\\n|-|-|-|-|-|-|\\nMMBench | 3,217 | \\u2717 | \\u2717 | Comprehensive | 86.1 / - |\\nMME | 2,800 | \\u2713 | \\u2717 | Comprehensive | 1790.04 / - |\\nMMStar | 1,500 | \\u2713 | \\u2717 | Comprehensive | 66.0 / - |\\nSeedBench | 19k | \\u2713 | \\u2717 | Comprehensive | 72.4 / - |\\nMMMU | 11.5k | \\u2713 | \\u2717 | College-Level Subject Knowledge | 69.1 / 88.6 |\\nHalBench | 1,129 | \\u2713 | \\u2717 | Hallucination | 67.58 / - |\\n**MMComposition (ours)** | 4,342 | \\u2713 | \\u2713 | **Compositionality** | 67.95 / 90.31 |\", \"q2\": \"To verify the challenging nature of our dataset and demonstrate the indispensable role of images, we conducted experiments comparing the models' performance between the standard setting and an image-blind setting. As shown in the table below, without image input, the models' performance decreases significantly, indicating that they must rely on image compositional information to obtain the correct answers. 
This result has also been included in the revised paper (see Table 10).\", \"image_blind_setting\": \"|Model|Perception|Reasoning|Probing|Overall|\\n|-|-|-|-|-|\\n|Qwen2-VL-72B|56.53|76.39|70.26|65.24|\\n|Qwen2-VL-72B-blind|45.16 (-11.37)|48.17 (-28.22)|30.76 (-39.50)|44.74 (-20.50)|\\n|InternVL2-26B|60.40|70.03|52.43|63.08|\\n|InternVL2-26B-blind|34.80 (\\u221225.60)|42.63 (\\u221227.40)|32.17 (\\u221220.26)|37.39 (\\u221225.69)|\\n|InternVL2-40B|64.57|74.12|67.14|67.95|\\n|InternVL2-40B-blind|37.88 (\\u221226.69)|43.35 (\\u221230.77 )|34.28 (\\u221232.86)|39.54 (\\u221228.41)|\\n|InternVL2-76B|63.41|75.44|58.46|67.28|\\n|InternVL2-76B-blind|33.93 (\\u221229.48)|44.08 (\\u221231.36)|32.68 (\\u221225.78 )|37.51 (\\u221229.77)|\", \"q3\": \"The data was initially annotated by student workers and then verified by another group of workers; finally, the dataset was refined and finalized by the authors. We followed the method proposed in FineMatch [1], which involves replacing attribute, relation, and object phrases while maintaining the Part of Speech (POS) tags unchanged. This approach ensures that mismatched captions remain as similar as possible at the character level to the original correct captions. We have updated the human performance metrics for each task in **Table 2 of the revised paper**.\\n[1] \\\"FineMatch: Aspect-Based Fine-Grained Image and Text Mismatch Detection and Correction.\\\" ECCV, 2024.\", \"q4\": \"We have clarified in our paper (lines 122\\u2013125) that the visual encoder plays a more significant role in the compositionality of VLMs. Models with enhanced capabilities for perceiving fine-grained compositional image information can provide more detailed inputs to language models. Moreover, we have added a comparison of different visual connectors, including Q-Formers and MLP models, with the results shown in the Table below. 
From these results, we conclude that the Q-Former architecture cannot provide detailed visual references to language models for fine-grained compositional image understanding. This result has also been included in the revised paper (see Table 9).\\nModel | Visual Encoder | LLM | V2L Adapter | Perception | Reasoning | Probing | Overall |\\n|-|-|-|-|-|-|-|-|\\nmPLUG-Owl2 | ViT-L/14 | LLaMA2-7B | Q-Former | 36.90 | 46.16 | 30.36 | 39.59 |\\nInstructBLIP-7B | ViT-G/14 | Vicuna-7B | Q-Former | 33.22 | 43.70 | 31.41 | 36.86 |\\nLLaVA1.5-7B | ViT-L/14 | Vicuna-7B | MLP | 36.51 | 47.04 | 30.32 | 39.71 |\\nInstructBLIP-13B | ViT-G/14 | Vicuna-13B | Q-Former | 35.53 | 42.70 | 25.24 | 37.06 |\\nLLaVA1.5-13B | ViT-L/14 | Vicuna-13B | MLP | 37.23 | 49.75 | 39.32 | 42.03 |\", \"q5\": \"Thank you for highlighting this issue. We have addressed it in the revised version.\", \"title\": \"Part 1\"}" ] }
0Xt7uT04cQ
Uni-Sign: Toward Unified Sign Language Understanding at Scale
[ "Zecheng Li", "Wengang Zhou", "Weichao Zhao", "Kepeng Wu", "Hezhen Hu", "Houqiang Li" ]
Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. To address this, we propose Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. First, we introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video paired with textual annotations, which enables effective large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating downstream tasks as a single sign language translation (SLT) task during fine-tuning, ensuring seamless knowledge transfer between pre-training and fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and a score-aware sampling strategy to efficiently fuse pose and RGB information, addressing keypoint inaccuracies and improving computational efficiency. Extensive experiments across multiple SLU benchmarks demonstrate that Uni-Sign achieves state-of-the-art performance across multiple downstream SLU tasks. Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
[ "Sign language understanding", "Pre-training", "Large-scale sign language dataset" ]
Accept (Poster)
https://openreview.net/pdf?id=0Xt7uT04cQ
https://openreview.net/forum?id=0Xt7uT04cQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qvbLHY9J6y", "TBfyUIVOCG", "Sz5lY2TjzG", "NcV7NLGo2j", "NSTJs6l3pz", "D7Wcy9lMqF", "5hNraPWRJ3" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "decision", "official_review", "meta_review" ], "note_created": [ 1730283319499, 1730681377243, 1730627466880, 1730816580363, 1737523713844, 1730517425436, 1734347610817 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5563/Reviewer_pQAF" ], [ "ICLR.cc/2025/Conference/Submission5563/Reviewer_y6Ev" ], [ "ICLR.cc/2025/Conference/Submission5563/Reviewer_CGke" ], [ "ICLR.cc/2025/Conference/Submission5563/Reviewer_w7tC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5563/Reviewer_3Yy5" ], [ "ICLR.cc/2025/Conference/Submission5563/Area_Chair_z6oR" ] ], "structured_content_str": [ "{\"summary\": \"This paper has two main contributions. 1) A Uni-Sign method for tackling the three sign language understanding tasks in a unified manner. The model first pre-trains on a large sign language dataset via language modeling, then is fine-tuned on each of the individual tasks separately. 2) A CSL-News dataset, which is a large-scale Chinese Sign Language dataset. Some other minor architectural designs are also proposed. Overall, the proposed method performs quite well across the three sign language understanding tasks, and particularly performs well in Sign Language Translation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Developing a unified approach to handle the various sign language understanding tasks is meaningful. In some sense, the work extends some recent LLM-based sign language understanding works by including the aspect of unifying across the sign language understanding tasks.\\n\\nThe authors introduce a new large-scale sign language dataset for Chinese Sign Language. 
This dataset could be quite useful for further progress in the field.\\n\\n\\nThe experimental results are quite impressive, especially on the gloss-free SLT task. In my opinion, gloss-free SLT is the setting that is the closest to real applications, so this is quite good.\", \"weaknesses\": \"The proposed method is not very novel. The proposed pre-training approach is to train the model in a language modelling manner, while also using visual features from the sign videos. Then, for the fine-tuning, the language modelling loss is again used for the various tasks. There are some minor contributions, such as a prior-guided fusion module and a score-aware sampling strategy, but these do not seem quite so substantial.\\n\\nI think that in the related works discussion, there should be a part discussing some other works in other fields employing language modelling (or sequence modeling) for tackling various tasks in a unified manner. For instance, this has been done for image-based tasks, and may have also been done for pose-based tasks. This will give the reader a better understanding of the developments of the \\u201cunifying via language modeling\\u201d paradigm.\\n\\n\\nMore specific concerns and questions are in the \\u201cQuestions\\u201d section.\", \"questions\": \"In Table 6, the performance of the proposed method is somewhat lower than the existing baseline SSVP-SLT. Although it is not a very big issue to me, I would like to know more about it. Why is this the only (rather large) SLT dataset where the proposed method achieves sub-optimal results?\\n\\nThe ablation results shown in Table 7 are rather strange as compared to Tables 8-10, because the settings are different. Table 7 runs experiments on ISLR and CSLR, Tables 8-10 run experiments on CSL-Daily for CSLR and SLT. Why are these different? 
Moreover, Tables 7 and 8 are run in the pose-only setting while Tables 9 and 10 are in the RGB-Pose setting; why should this be the case?\\n Furthermore, some of the more important experiments (Tables 7 and 8 in my opinion) should be evaluated on all three different sign language understanding tasks.\\n\\n\\nWhat is the impact of the pre-training? This crucial aspect has not been evaluated properly. For instance, what if the model is trained only using the fine-tuning stage (Stage 3), but for a longer time (i.e., matching the overall training time of the pre-train then fine-tune approach)? How does this affect the performance? This is important as it shows us the benefits of pre-training. Although some results have been provided in Table 7, the results and implications are not clear to me. Furthermore, the task-specific training settings and details have not been mentioned.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Uni-Sign, a unified pre-training framework for sign language understanding (SLU) tasks, addressing the challenges in existing methods that struggle with transferring knowledge across different tasks. The framework uses a new large-scale dataset, CSL-News, which contains 1,985 hours of Chinese Sign Language (CSL) videos paired with textual annotations. Extensive experiments demonstrate that Uni-Sign achieves state-of-the-art performance across multiple SLU benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Uni-Sign effectively unifies multiple SLU tasks, such as isolated sign language recognition (ISLR), continuous sign language recognition (CSLR), and sign language translation (SLT), under a single framework.\\n2. 
The introduction of CSL-News, a substantial CSL dataset, provides a significant resource for the SLU field and addresses the limitations of prior smaller datasets.\", \"weaknesses\": \"1. Compared to other datasets, what unique advantages or characteristics does the proposed CSL-News dataset offer besides its longer duration? Additionally, why is the vocabulary size relatively limited, and could the restricted language variety impact pre-training effectiveness?\\n2. In the comparisons of downstream tasks in Section 4.3, did other methods also use the CSL-News dataset for pre-training? If not, does this raise any concerns about fairness in the comparisons?\\n3. In the comparative experiments, while high-performing results are analyzed, the reasons behind lower performance should also be provided, such as in Tables 4 and 6.\\n4. In Tables 3 to 6, what would the results of Uni-Sign be if it used only RGB video?\\n5. How do the computational costs, inference time, and memory usage of the proposed model compare to other methods? Does Uni-Sign maintain a competitive advantage in these aspects?\\n6. The manuscript includes numerous comparative results, but it lacks visualizations to intuitively demonstrate the model\\u2019s effectiveness. More visual presentations for each downstream task are recommended.\", \"questions\": \"Please refer to the Weakness section above. If the authors can address these concerns, I would consider raising the rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents Uni-Sign, a novel framework for Sign Language Understanding (SLU) that leverages large-scale generative pre-training and a unified fine-tuning paradigm. The paper presents a well-motivated and well-executed approach to SLU. 
The introduction of the CSL-News dataset and the innovative Uni-Sign framework are significant contributions to the field, demonstrating state-of-the-art performance across various SLU tasks. The paper is well-written and clearly explains the proposed methodology and experimental results. The authors make several notable contributions:\\n\\u2022Introduction of CSL-News: The authors introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset comprising 1,985 hours of video-text pairs. This dataset significantly surpasses existing CSL datasets in size and diversity\\n\\u2022Unified Pre-Training and Fine-Tuning: During fine-tuning, it treats downstream SLU tasks, such as isolated sign language recognition (ISLR), continuous sign language recognition (CSLR), and sign language translation (SLT), as a single SLT task. This unified approach facilitates seamless knowledge transfer and eliminates the need for task-specific fine-tuning methods.\\n\\u2022Prior-Guided Fusion (PGF) Module: To address the limitations of inaccurate keypoints, the authors propose a PGF module that fuses pose and RGB information using keypoint coordinates as priors. \\n\\u2022Score-Aware Sampling Strategy: The authors introduce a score-aware sampling strategy to improve computational efficiency. \\n\\u2022Comprehensive Evaluation: The paper includes a comprehensive evaluation of Uni-Sign across various SLU benchmarks, demonstrating its superior performance in ISLR, CSLR, and SLT tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality:\\n1. The paper presents Uni-Sign, a novel unified pre-training framework for Sign Language Understanding (SLU) that bridges the gap between pre-training and downstream tasks by treating them as a single Sign Language Translation (SLT) task during fine-tuning. This approach deviates from previous methods that relied on indirect pretext tasks or were limited by data scale and transfer capability\\n2. 
The authors introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video with text annotations, considerably larger than existing CSL datasets. This dataset enables effective large-scale pre-training, addressing a gap in CSL resources compared to American Sign Language (ASL) and British Sign Language (BSL)\\n3. The paper proposes a Prior-Guided Fusion (PGF) module that utilizes keypoint coordinates as priors to model fine-grained spatial consistency between pose and RGB modalities, going beyond simple spatial-temporal fusion techniques. This approach addresses the representational gap between modalities and leverages keypoints to enhance accuracy. \\n4. A score-aware sampling strategy is introduced to address the computational challenges of RGB-pose fusion by selectively choosing RGB frames corresponding to low-confidence keypoints, balancing performance with speed\", \"quality\": \"1. The paper is well-written and presents a clear and comprehensive methodology. The authors provide detailed descriptions of their approach, including data curation, pre-training and fine-tuning strategies, and multi-modal fusion techniques\\n2. The ablation studies thoroughly investigate the contribution of each key component, offering insights into the model's performance and the impact of design choices\\n3. Quantitative results show that Uni-Sign surpasses previous state-of-the-art methods on multiple benchmarks, including significant improvements in BLEU4 scores for SLT tasks
The introduction of the CSL-News dataset addresses a significant need for large-scale CSL resources, potentially fostering advancements in CSL research\\n2. The unified pre-training and fine-tuning framework with a generative approach demonstrates a promising direction for improving SLU performance, particularly for SLT tasks\\n3. The proposed PGF module and score-aware sampling strategy offer effective solutions for multi-modal fusion and computational efficiency, potentially benefiting future SLU research\\n4. The paper's findings have implications for advancing sign language technologies, promoting accessibility and communication for the Deaf/Hard of Hearing community\\n5. The authors' commitment to open-sourcing the code and dataset further contributes to the significance of the work, facilitating reproducibility and future research in SLU\", \"weaknesses\": \"1. Discussion on Computational Complexity: While the authors introduce a score-aware sampling strategy to improve efficiency, a more in-depth discussion on the computational complexity of Uni-Sign would be beneficial. This could include analyzing the trade-offs between accuracy and computational cost for different sampling probabilities and exploring potential optimizations.\\n2. Further Analysis of CSL-News: While the paper describes the creation of CSL-News, further analysis of the dataset's characteristics, such as vocabulary distribution and linguistic complexity, would be valuable. This would provide a more comprehensive understanding of the dataset's potential and limitations.\\n3. Cross-Dataset Generalization: Evaluating Uni-Sign's performance on unseen sign language datasets would demonstrate its generalization capabilities. This could involve fine-tuning the pre-trained model on a different CSL dataset or even a dataset from another sign language, like American Sign Language (ASL). 
Successful cross-dataset generalization would highlight the robustness of the learned representations and the effectiveness of the unified approach.\\n4. Analysis of Error Patterns: A qualitative analysis of the translation errors made by Uni-Sign would provide valuable insights into its limitations and potential areas for improvement. This could involve categorizing errors based on linguistic features, such as sentence complexity, sign ambiguity, or finger-spelling. Identifying common error patterns could guide future research directions.\\n5. Exploration of Multi-Signer Scenarios: The authors mention their interest in exploring SLU tasks in complex scenarios, such as multi-signer situations. Including preliminary experiments or discussions on adapting Uni-Sign to handle such scenarios would further enhance the paper's impact and contribution to the field.\", \"questions\": \"The paper in general addressed the ideas and motivations it introduces. The following question will help add more comprehensive understanding.\\nGeneralization and Applicability\\n1. Multilingual Evaluation: The sources primarily focus on CSL and ASL. Could the authors comment on the applicability of Uni-Sign to other sign languages? How might the model's architecture and pre-training strategies need to be adapted for multilingual SLU? This is important to assess the generalizability of Uni-Sign and its potential impact on a broader range of sign language communities\\n2. Multi-signer Scenarios: How well does Uni-Sign perform in situations involving multiple signers? What challenges might arise in such scenarios, and how could the model be modified to handle them effectively? Addressing this question would provide a more realistic assessment of Uni-Sign's capabilities in real-world applications where multiple signers may be present\\n\\nComparison and Analysis\\n1. 
Comparison with LLM-based SLT Methods: Recent studies like Sign2GPT and Sign-LLM have explored the use of LLMs for gloss-free SLT. Could the authors provide a comparative analysis of Uni-Sign against these LLM-based approaches? This would help clarify Uni-Sign's contributions and position it within the broader landscape of SLT research\\n2. In-depth Analysis of the Unified Fine-tuning Paradigm: How does the shared objective function influence the performance of individual tasks like ISLR and CSLR? Are there any potential task-specific adaptations that could be incorporated within the unified framework to further optimize performance? This analysis would provide a more nuanced understanding of the paradigm's strengths and weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a new pre-training framework that bridges the gap between pre-training and downstream sign language understanding tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm that achieves impressive performance in multiple benchmark tests.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The Uni-Sign framework proposed by the authors utilizes a large-scale generative pre-training strategy and a novel fine-tuning paradigm to bridge the gap between pre-training and downstream sign language understanding tasks in traditional approaches.\\n\\n2. The Uni-Sign framework achieves significant performance gains on both sign language recognition and translation tasks, and experiments are conducted on multiple datasets.\\n\\n3. The related work of paper is adequate, investigating research on sign language tasks including pre-training strategies, dataset development, and so on, from a variety of perspectives.\", \"weaknesses\": \"1. 
The paper is not clear and detailed enough in explaining the score-aware sampling strategy, and does not give a detailed analysis of the process or a corresponding explanation in Figure 5, which could lead to potential misunderstandings or errors.\\n\\n2. The authors omitted experimental results on several widely used datasets, such as Phoenix14, Phoenix14T, USTC-SLR 500, USTC-CSL100, etc.\\n\\n3. As shown in Tables 4 and 6, the proposed Uni-Sign method does not achieve the best performance on multiple datasets for continuous sign language recognition and sign language translation. It even performs worse when more modalities are introduced, which makes me worried about the performance of this work.\\n\\n4. The number of parameters of the model is not mentioned in the paper. This feedback highlights the importance of including these key performance metrics, as they are critical for evaluating the practicality of the model.\\n\\n5. It is recommended that the authors make font color changes for the tables throughout the article, due to the large amount of experimental data, while bolding may mislead the reader, especially for Tables 3 through 6.\", \"questions\": \"1. Although this differs from traditional sign language recognition methods employing means such as MLP and CTC loss, the supervision the authors propose still differs across tasks (for example, words, glosses, and sentences), so why is it still referred to as a unified paradigm?\\n\\n2. In Fig. 5, the paper does not explain why the feature information of the face has to be forwarded to the left Pose Encoder after it has been encoded by the Pose Encoder.\\n\\n3. 
In line 479 of the paper, the authors show a boost of 1.36 on BLEU-4, but the corresponding value is not found in Table 9.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. It also introduces CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of videos paired with textual annotations.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Pros:\\n1. This work proposes a unified framework to conduct pretraining and finetuning, which demonstrates novelty.\\n2. This work shows promising performance across a wide range of benchmarks.\\n3. The paper is easy to understand.\", \"weaknesses\": \"Questions and cons:\\n1. During the data curation process, the authors use an ASR toolkit (FunASR) to convert the speech into texts as labels. There exist some problems. First, as the speech signal has a time delay with the sign language expressed by the signer, how do the authors ensure that the temporally cropped clips are exactly aligned with the transcribed texts? Second, the authors have stated that the average text length is 40 words and the average clip length is 9.5s. It is very hard for a signer to express 40 words within 9.5s. Thus, it is most probable that the signer has neglected some meanings in the sentence and only expressed part of the meanings in the signs. In this case, the signs are probably not aligned with the transcribed texts. 
Third, I observed that in the paper, the authors do not organize a double-check process for the cropped videos from the TV shows to verify the alignment between texts and clips, the correctness of the transcribed texts, the correctness of the transcribed signs, and other aspects. Thus, how can the completeness and correctness of the curated dataset be assured?\\n2. During the experiments for CSLR, PHOENIX14 and PHOENIX14-T are also broadly used datasets. Why not report the results on these datasets? Is it due to the language gap between the pretraining data and the downstream data? How about the performance on these two datasets?\\n3. In Table 3 and Table 5, some numbers other than the results reported by the proposed method are bolded. The authors may clarify this or use another way to emphasize the results.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a unified pre-training framework for Sign Language Understanding (SLU) that bridges the gap between pre-training and downstream tasks, including isolated sign language recognition (ISLR), continuous sign language recognition (CSLR), and sign language translation (SLT). This is achieved through a large-scale generative pre-training strategy and a unified fine-tuning paradigm. The key strengths of the paper include its innovative unified framework that addresses knowledge transfer challenges in SLU, the introduction of CSL-News, which is significantly larger and more diverse than existing datasets (1,985 hours of Chinese Sign Language videos paired with textual annotations), and the inclusion of PGF and score-aware sampling, which improve multi-modal learning efficiency and accuracy. The comprehensive evaluation on various SLU benchmarks further demonstrates the robustness and effectiveness of the proposed approach. 
All reviewers provided positive feedback, with recommendations of \\\"accept\\\" or \\\"marginally above the acceptance threshold.\\\"\", \"additional_comments_on_reviewer_discussion\": \"Several key concerns were raised by the reviewers, including: 1) a lack of sufficient clarity and detail in explaining the score-aware sampling strategy, 2) the absence of a specification for the number of parameters in the model, 3) the results of using only RGB video, 4) a lack of discussion on computational complexity, 5) the absence of an analysis of error patterns, and 6) uncertainty regarding how the temporally cropped clips are aligned exactly with the transcribed texts. In the rebuttal, the authors comprehensively addressed each of these points and effectively responded to all major comments. As a result, all reviewers are now positive about the paper.\"}" ] }
0Xc6o1HKXD
Multi-Perspective Test-Time Prompt Tuning for Global, Local Visuals, and Language
[ "Zhaohong Huang", "Yuxin Zhang", "JingJing Xie", "Fei Chao", "Rongrong Ji" ]
Recent advances in vision-language models (VLMs) have demonstrated significant generalization across a broad range of tasks through prompt learning. However, bridging the distribution shift between training and test data remains a significant challenge. Existing research utilizes multiple augmented views of test samples for zero-shot adaptation. While effective, these approaches focus solely on global visual information, neglecting the local contextual details of test images. Moreover, simplistic, single-form textual descriptions limit the understanding of visual concepts, hindering the transfer performance of classes with similar or complex visual features. In this paper, we propose a Multi-Perspective Test-Time Prompt Tuning method, MP-TPT, building on two key insights: local visual perception and class-specific description augmentation. Specifically, we introduce local visual representations from VLMs during the optimization process to enhance the prompts' ability to perceive local context. On the other hand, we design a data augmentation method at the text feature level that imparts regional visual priors to specific class texts, thereby enriching the class-specific descriptions. Furthermore, we synchronize the multi-view concept during inference, integrating both local and global visual representations with text features for a deeper understanding of visual concepts. Through extensive experiments across 15 benchmark datasets, we demonstrate the advantages of MP-TPT, particularly achieving a 1% improvement in state-of-the-art TPT accuracy in cross-dataset settings, along with a 4.5 times acceleration in inference speed.
[ "Prompt Learning", "Test Time Adaption", "Vision-Language Models" ]
https://openreview.net/pdf?id=0Xc6o1HKXD
https://openreview.net/forum?id=0Xc6o1HKXD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "m6B7qVv5JQ", "gODmZdqH6t", "atBDoIraNB", "V2fWiURbmE" ], "note_type": [ "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730597898815, 1732167150025, 1730721775608, 1730447611675 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7493/Reviewer_Y52q" ], [ "ICLR.cc/2025/Conference/Submission7493/Authors" ], [ "ICLR.cc/2025/Conference/Submission7493/Reviewer_7h8V" ], [ "ICLR.cc/2025/Conference/Submission7493/Reviewer_yWFV" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes utilizing local visual context and class-specific text description augmentation to improve the classification accuracy of the test-time prompt tuning of CLIP model. The local visual representation is obtained by projecting the entire visual feature to the region level and calculating the similarity with text features. The top-K high-similarity region features are selected to produce the class-specific descriptions. The prompts and the global-local visual features are further aligned through a dual interaction during the tuning phase. Experiments show some improvement.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation is clear. Existing test prompt tuning methods focus only on global visual feature augmentation, neglecting the importance of local context in images. By introducing fine-grained local visual features and their corresponding text prompt descriptions, the proposed method should contribute to improved test-time prompt tuning results. The paper is easy to understand.\", \"weaknesses\": \"1. The main weakness of this paper is that the experimental results are marginal. From Table 1, we can see that the best result of the proposed MP-TPT (65.66) is only 0.2% better than the baseline DiffTPT (65.47). Similarly, in Table 2, the MP-TPT method also shows a marginal improvement (less than 0.5%). 
Did the authors conduct statistical significance tests to verify the effectiveness of the proposed method? These minor differences may also stem from the randomness of the training process. Providing error bars or standard deviations would make the results more convincing. Furthermore, does the method work beyond the CoOp framework, such as on Maple[1] and PromptSRC[2]?\\n\\n[1] MaPLe: Multi-modal Prompt Learning\\n[2] Self-regulating Prompts: Foundational Model Adaptation without Forgetting\", \"questions\": \"1. In L107, how can the method enhance inference efficiency when it requires multi-perspective views, which will obviously increase computational and storage costs? Additionally, Table 1 shows that MP-TPT-S has a lower inference time than TPT. What are the different experimental settings between these two methods, and is the comparison fair? Could the authors provide a more detailed analysis of computational complexity and memory usage?\\n\\n2. The description in Section 3.2.3 is difficult to understand. What is the difference between test time tuning and test time inference? How are $\\boldsymbol{f}^{t *}$ and $\\hat{\\boldsymbol{f}}^{t *}$ generated? Additionally, Figure 2c is confusing; how is Eq. 12 applied in Figure 2c, e.g., where is the $\\lambda$ in Figure 2c?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper studies the topic of test-time prompt tuning (TPT), and proposes MP-TPT. MP-TPT introduces local patch features as additional visual augmentations, which may be crucial for classification. Additionally, it leverages local visual features to enhance text feature descriptions. 
Extensive experiments demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The motivation to use local features is intuitive, as they may contain important details that can enhance model performance.\\n\\n2. The paper conducts extensive experiments, including comparisons of MP-TPT on two representative benchmarks and ablation studies.\", \"weaknesses\": \"1. **Limited novelty and contribution**: The concept of using local features to enrich image features and image counterparts to enhance text features has already been proposed in [1]. The primary difference in this paper is the implementation of this idea in a test-time prompt tuning scenario. Surprisingly, [1] is not cited or discussed in this paper.\\n\\n2. **Clarity and organization**: The paper is difficult to follow due to disorganized writing, confusing formulas, and figures and tables that are not self-contained. This impacts its suitability for ICLR acceptance. I list some below:\\n\\n 1. Line 225: The term \\\"local visual representation\\\" is unclear. Is this referring to CLIP patch features? This needs clarification.\\n 2. Line 253: Why are classification probabilities referred to as \\\"cross-modal information\\\"? Is it simply because they use features from two modalities? What specific information do they contain?\\n 3. In Equation (7), The resulting shape is $\\\\mathbb{R}^{W H \\\\times d}$. How are $M$ augmented features derived?\\n 4. In Equation (7), There are 5 left brackets and 3 right brackets, making the expression difficult to understand.\\n 5. In Table 1, how is the inference time calculated? Are the times in seconds? Different datasets with varying classes should have different inference speeds. The table should be self-contained.\\n 6. 
Multiple definitions of $K$: In Line 161, $K$ is defined as the number of classes, while in Line 244 and Equation (6), $K$ is the number of selected regions.\\n 7. Undefined terms: $\\\\boldsymbol{f}^t$ in Equation (7) is not defined. Is it a set or a concatenation of $\\\\boldsymbol{f}^t_i$?\\n 8. The definition of a set in Equation (8) is incorrect. The part \\u201c$p\\\\left(y_k \\\\mid \\\\tilde{\\\\boldsymbol{f}}_i^t\\\\right)$\\u201d after the colon should be removed.\\n\\n3. **Experimental issues**: \\n\\n 1. The claims in Line 28 are misleading. MP-TPT did not achieve a 1% improvement over TPT and 4.5 times faster simultaneously. These are achieved by different methods, MP-TPT-L and MP-TPT-S.\\n\\n 2. Some highly relevant works, such as [2] and [3], are missing from Tables 1 and 2. The performance of MP-TPT is significantly lower compared to these methods. More discussion is needed.\\n\\n | Methods | Cross-dataset | Domain Generalization |\\n | --------------- | ------------- | --------------------- |\\n | PromptAlign [2] | 66.92 | 63.55 |\\n | TDA [3] | 67.53 | 63.89 |\\n | MP-TPT-L | 65.66 | 62.35 |\\n\\n 3. The ablation study is unconvincing. Why are results provided only on 5 datasets? The proposed methods can lead to performance degradation in many cases, such as in the Flowers102 and Caltech101 datasets. The average performance gain seems to stem from the EuroSAT dataset, which only contains 10 classes and is sensitive.\\n\\n4. **Effectiveness of design**: The use of random masks on local features as a proxy for random cropping is questionable. I explored this idea in test-time prompt tuning tasks a year ago and found it ineffective, raising concerns about its effectiveness in MP-TPT.\\n\\n5. **Lack of error bar analysis**: The paper does not include an error bar analysis, which is an important aspect of experimental evaluation.\\n\\n[1] Task-Oriented Multi-Modal Mutual Learning for Vision-Language Models. 
ICCV 2023.\\n\\n[2] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization. NeurIPS 2023.\\n\\n[3] Efficient Test-Time Adaptation of Vision-Language Models. CVPR 2024.\", \"questions\": \"Intuitively, local patch features do not align with text features and therefore cannot be directly utilized, as studied in [1]. Could the authors provide more discussion or visualizations to illustrate this aspect?\\n\\n[1] A Closer Look at the Explainability of Contrastive Language-Image Pre-training.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel method called Multi-Perspective Test-Time Prompt Tuning (MP-TPT) designed to enhance vision-language models (VLMs) during test time. Unlike prior approaches that focus solely on global visual features, MP-TPT combines global and local visual information with language prompts, offering a comprehensive view during test-time adaptation. The method enhances textual prompts with class-specific descriptions by using local visual information, which allows the model to capture diverse contextual variations. Extensive experiments across multiple benchmarks demonstrate that MP-TPT achieves notable improvements in accuracy and inference speed compared to state-of-the-art methods, particularly in zero-shot and cross-dataset generalization scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. MPTPT addresses a critical limitation in existing methods that rely solely on global visual features. By incorporating class-specific, region-based prompts, the paper proposes an innovative way to adapt VLMs to unseen data without retraining, which is both effective and practical.\\n 2. 
The methodology is rigorous, with extensive experiments on 15 benchmark datasets that demonstrate the model's adaptability and efficiency, especially in zero-shot and cross-dataset settings. Ablation studies add further credibility by detailing each component's contribution.\", \"weaknesses\": \"1. Limited improvement over global feature methods: results indicate that the performance gains of MP-TPT over other methods focusing on global visual features, such as DiffTPT, are not substantial, which raises questions about the effectiveness of incorporating local visuals.\\n2. The paper does not sufficiently clarify the interaction between local visual features and text descriptions. A more detailed explanation of how these components integrate during optimization and inference would enhance understanding.\\n3. While MP-TPT introduces local visual information to improve class-specific descriptions, the paper could benefit from a deeper analysis of how these local augmentations influence specific categories, particularly when handling complex classes.\", \"questions\": \"1. It would strengthen your claims to include more comprehensive comparisons with a broader range of state-of-the-art methods in your experiments. Highlighting specific scenarios where MP-TPT excels or falls short could provide valuable insights.\\n2. Can you clarify the specific roles that global, local, and language perspectives play in test-time prompt tuning? In particular, how do local and language perspectives interact, considering their apparent strong coupling?\\n3. Could you provide more experiments on MPTPT+CoOP/MaPLE or other prompt tuning methods in Base-to-Novel Generalization? This would help to prove MPTPT\u2019s effectiveness as a plug-and-play prompt learning method. \\n4. 
Providing detailed ablation studies that analyze the trade-off between speed, accuracy, and the number of parameters would enhance the understanding of the practical implications of MP-TPT.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0XT3Lg6S2Q
Efficient Adaptive Filtering for Deformable Image registration
[ "Renjiu Hu", "Xiang Chen", "Jiacheng Wang", "Gaolei Li", "Noel C Codella", "Hang Zhang" ]
In medical image registration, where targets exhibit piecewise smooth structures, a carefully designed low-resolution data structure can effectively approximate full-resolution deformation fields with minimal accuracy loss. Although this physical prior has proven effective in traditional registration algorithms, it remains underexplored in current learning-based registration literature. In this paper, we propose AdaWarp, a novel neural network module that leverages this prior for efficient and accurate medical image registration. AdaWarp comprises an encoder, a guidance map generator, and a differentiable bilateral grid, enabling an edge-preserving low-frequency approximation of the deformation field. This design reduces computational complexity with low-resolution feature maps while increasing the effective receptive field, achieving a balanced trade-off between registration accuracy and efficiency. Experiments on two registration datasets covering different modalities and input constraints demonstrate that AdaWarp outperforms existing methods in accuracy-efficiency and accuracy-smoothness tradeoffs.
[ "Deformable image registration", "Adaptive filtering", "Bilateral Grid", "Piece-wise Smooth" ]
Reject
https://openreview.net/pdf?id=0XT3Lg6S2Q
https://openreview.net/forum?id=0XT3Lg6S2Q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wNcR9gXElh", "wGsBztYK9k", "vw0f7zwWYm", "vXsFHKkQGD", "tKkoOqNvE7", "qklnn3lgXD", "oR5kjPhNWE", "mBPRZ6dvCl", "m23qRr58Ti", "lIyOgaOdQG", "j2TMr7jwAp", "if5tjk9RD7", "gtDZvlih2S", "fAsGqyWFP1", "eXqhJAo2hQ", "e2R94beXWe", "dmVp9KuF5n", "ZI5iIhTAdE", "ZEXowxnKDm", "YBkJfYQN2e", "X7PcjJApAX", "VhHEWKGLDS", "VTZKGDrv2c", "VEAYHlw5b5", "TdlXJ1Zdbe", "OpQoyZeh27", "OjZWUoEMn9", "OfMncqtFYZ", "M1cNs3aikN", "KM1FoP299w", "HjTdR5bS53", "H1K4COjXAM", "G8wNE9N1PI", "Fk7gAYCRVE", "DSiFSiPdH4", "CdUpZWKlu5", "BWAN2sTqgK", "6xZ7ajtElR", "64JJwHjRqu", "61jCT8gOy1", "5EZoduiiQD", "0coVfVnV7c", "0RRQm8dZL1", "0IkanmQaXK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732587617028, 1732509610107, 1733024088931, 1732961206089, 1730487861452, 1732959457956, 1732450396693, 1733121465573, 1732869852214, 1732568206529, 1733026375655, 1733125006340, 1733123084874, 1732517981468, 1732867481722, 1733130001145, 1732263290954, 1733132969444, 1732566098324, 1732531913501, 1732916703887, 1732865006536, 1732953404716, 1732432044021, 1733008824058, 1730299388029, 1732565569214, 1732565106138, 1737524270415, 1732949294621, 1732962019764, 
1733132172129, 1729716701188, 1732490845172, 1733128643731, 1732566616515, 1732868589972, 1733124817452, 1734586655619, 1732588054598, 1733119356829, 1732263539504, 1730820521675, 1732958208392 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_MgEu" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_TEi6" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_RMNZ" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_MgEu" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_UMDT" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Area_Chair_Ai5N" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ], [ "ICLR.cc/2025/Conference/Submission13590/Reviewer_TEi6" ], [ "ICLR.cc/2025/Conference/Submission13590/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to 'Final remarks'\", \"comment\": \"Dear Reviewer UMDT,\\n\\nThank you for your critiques. We would like to provide a gentle reminder of the ICLR submission-review procedure and timeline, as mentioned in the email from the ICLR program chairs.\\n\\n**Resubmission:** \\nUnlike conferences like CVPR, ICLR allows revised submissions that address reviewer comments or include improvements. Therefore, we believe we can incorporate the feedback and reorganize the manuscript for greater clarity.\\n\\n**Timeline:** \\nWe are still in the discussion phase and have not yet responded to all your comments or uploaded our revised manuscript. The timeline is as follows: \\n- **November 27th**: Last day to upload a revised PDF. After this date, only replies on the forum are allowed (no manuscript changes). \\n- **December 2nd**: Last day for reviewers to post messages to authors (six-day extension). \\n- **December 3rd**: Last day for authors to post messages on the forum (six-day extension). \\n\\n> *\\\"Many of the responses of the authors can be incorporated into the methods and results sections. 
Due to the amount of changes and additions, I believe that the paper would require another round of reviews to assess it.\\\"*\\n\\nAs we are working on the revised manuscript, we believe that if your concerns are addressed in the comments, the final version will reflect the necessary changes and additions.\"}", "{\"title\": \"Part 1: Addressing [W1] and [Q1]\", \"comment\": \"Dear Reviewer RMNZ,\\n\\nWe sincerely appreciate your comments and suggestions. Below, we address your concerns listed in the weakness section as [W1], [W2], and [W3], along with your questions. Note that some questions overlap with the weaknesses, so we have combined them in our responses.\\n\\n[W1] Reply: Thank you for raising this point. We have added more references to the literature review in the revised manuscript and included two recent multi-scale approaches: CorrMLP [1], also mentioned by Reviewer MgEu, and RDP [2]. Both methods demonstrate improved performance compared to existing baselines in Tables 1 and 2. Additionally, we note that several existing baselines, such as MemWarp, LapIRN, and discrete optimization-based methods, also employ image pyramids for multi-scale processing.\", \"below_are_the_results_of_the_added_methods_compared_to_ours_for_reference\": \"**Cardiac Dataset:**\\n| Model | Avg. 
(%) | RV (%) | LVM (%) | LVBP (%) | HD95 (mm) \\u2193 | SDlogJ \\u2193 |\\n|----------------|----------|--------|---------|----------|-------------|-----------|\\n| Initial | 58.14 | 64.50 | 48.33 | 61.60 | 11.95 | - |\\n| CorrMLP | 77.58 | 74.84 | 75.68 | 82.21 | 9.23 | 0.052 |\\n| RDP | 77.62 | 74.70 | 75.95 | 82.20 | 9.15 | 0.053 |\\n| Ada-Res (Ours) | 79.20 | 78.14 | 76.31 | 83.15 | 8.33 | 0.050 |\\n\\n\\n**Abdomen Dataset:**\\n| Model | Type | Dice (%) | HD95 (mm) \\u2193 | SDlogJ \\u2193 |\\n|-------------------|-------|----------|-------------|-----------|\\n| Initial | - | 30.86 | 29.77 | - |\\n| CorrMLP | L | 56.58 | 20.40 | 0.16 |\\n| RDP | L | 58.77 | 20.07 | 0.22 |\\n| Ada-Cost (Ours) | L&D | 62.74 | 15.03 | 0.12 |\\n\\n\\n[Q1] Reply: Thank you for raising this point. Initially, we developed Ada-Res for the ACDC dataset to achieve strong performance. However, applying Ada-Res to the abdomen dataset revealed suboptimal performance due to inherent challenges like the aperture and large displacement problems.\\nTo address these, we researched solutions tailored to abdomen datasets and found that incorporating image pyramids and discrete optimization was more effective. Leveraging the flexibility of AdaWarp, we integrated these approaches into Ada-Cost.\\nTo verify generalizability, we tested Ada-Cost on the ACDC dataset and observed improved Dice scores but slightly degraded HD95 compared to Ada-Res. We have updated the manuscript to reflect Ada-Cost results on ACDC and ensure consistency in model structure across datasets.\", \"below_are_the_results_of_the_ada_cost_applied_to_acdc_dataset_in_comparison_to_initial_ada_res\": \"**Cardiac Dataset:**\\n| Model | Avg. 
(%) | RV (%) | LVM (%) | LVBP (%) | HD95 (mm) \\u2193 | SDlogJ \\u2193 |\\n|------------------|----------|--------|---------|----------|-------------|-----------|\\n| Initial | 58.14 | 64.50 | 48.33 | 61.60 | 11.95 | - |\\n| Ada-Res (Ours) | 79.20 | 78.14 | 76.31 | 83.15 | 8.33 | 0.050 |\\n| Ada-Cost (Ours) | 79.82 | 77.58 | 77.95 | 83.92 | 8.98 | 0.050 |\\n\\nIn addition, we would like to clarify that while many studies use the same backbone network across datasets to demonstrate generalizability, **AdaWarp is a flexible network module** rather than a full backbone that can be easily integrated into existing frameworks. \\n\\n- For instance, we incorporated AdaWarp into ConvexAdam as a bilateral filter (Table 2). Although the improvement is limited due to the non-learnable nature of ConvexAdam, this demonstrates AdaWarp's flexibility. \\n\\n- As discussed in the manuscript, AdaWarp can also be integrated into other registration frameworks, such as **zero-shot lung CT registration** based on keypoints and image intensities, with preliminary results shown in Table 3. Furthermore, AdaWarp can be applied to **medical image segmentation** tasks, with preliminary results provided in Table 4.\\n\\n[1] Meng, M., Feng, D., Bi, L. and Kim, J., 2024. Correlation-aware Coarse-to-fine MLPs for Deformable Medical Image Registration. CVPR 2024.\\n\\n[2] Wang, H., Ni, D. and Wang, Y., 2024. Recursive deformable pyramid network for unsupervised medical image registration. TMI 2024.\"}", "{\"comment\": [\"Dear Reviewer RMNZ,\", \"Thank you for your thoughtful consideration of our detailed responses. We greatly appreciate the time and effort you have dedicated to reviewing our work and revisiting your assessment. While we are grateful for the score increase, we would like to further emphasize how our work contributes to and benefits both the medical imaging and learning representation community, in the hope that you might consider **an even higher score**.\", \"1. 
**Advancing the Field of Medical Image Registration**:
   - **Addressing a Critical Need**: Our work fills a significant gap in medical image registration by seamlessly incorporating the piece-wise smooth (P-S) prior into an end-to-end trainable framework. This advancement not only enriches the theoretical foundations of registration algorithms but also provides a practical tool that the community can leverage for more accurate and reliable image alignment.
   - **Methodological Innovation with Broad Applicability**: Beyond registration, our method builds connections between learnable adaptive filtering and the self-attention and gated-attention mechanisms. We believe this approach has the potential to serve as **a building block for next-generation neural networks**. Its adaptability and efficiency make it suitable for various image processing tasks, potentially inspiring new research directions and applications beyond the medical imaging community.

2. **Empowering the Community through Empirical Validation**:
   - **Setting a New Performance Benchmark**: Our results demonstrate that Ada-Cost achieves state-of-the-art performance, both in quantitative metrics and in producing realistic displacement fields. By aligning with realistic cardiac motion, as shown in Figure 10, our method provides a reliable benchmark that others in the community can build upon.
   - **Simplicity Enhancing Accessibility**: Despite using a very simple architecture consisting of only a handful of convolutional layers, Ada-Cost surpasses much more complex models. Its computational efficiency and ease of implementation make it accessible for researchers and practitioners, facilitating wider adoption and fostering further innovation.
   - **Handling Complex Clinical Scenarios**: The ability of Ada-Cost to preserve local discontinuities, critical in datasets with sharp boundaries and sliding motions such as the abdomen dataset, addresses significant challenges in medical imaging. This capability can improve the accuracy of diagnoses and interventions, directly benefiting clinical outcomes.

3. **Catalyzing Future Research and Clinical Applications**:
   - **Foundation for Advanced Models**: By establishing a stronger baseline for learning-based registration models, Ada-Cost serves as a foundational building block for developing more powerful and advanced image registration frameworks.
   - **Extending Impact Beyond Registration**: We have demonstrated preliminary results extending Ada-Cost to other challenging tasks, such as keypoint-based lung motion estimation and medical image segmentation. Additionally, our preliminary derivation of the connections between adaptive filtering and self-attention showcases the method's potential to contribute to the development of **next-generation neural networks**. These extensions highlight the method's adaptability and its potential to address diverse challenges in medical imaging, ultimately benefiting patient care.

4. **Collaborative Engagement and Community Benefit**:
   - **Open Dialogue and Transparency**: Throughout this discussion, we have engaged thoroughly and openly with your concerns, fostering a constructive dialogue that we believe strengthens the work and its presentation.
   - **Enhancing Clarity for Community Adoption**: We have updated the manuscript to improve clarity and ensure that all key claims are well-supported by evidence, both qualitatively and quantitatively. By doing so, we aim to make our work more accessible and beneficial to the community, encouraging others to build upon our findings.

We sincerely thank you for already increasing your score, which reflects your recognition of the novelty, simplicity, and practical impact of our work. Given its potential to serve as a foundation for further advancements and its broader applicability to critical tasks in medical imaging, we kindly ask you to consider these points when deciding whether to raise your score further, as we believe our contributions make a substantial impact on the community.

Thank you once again for your thoughtful feedback and engagement with our work.

---
**Comment by Reviewer**

I apologize if my previous comments misused the term "guarantee"; that was not my intention. If I understand correctly, while applying the P-S assumption may help achieve clearer boundaries in the warped image, relying on it alone is insufficient. Instead, your contribution focuses on preserving displacement field discontinuities, and your evaluation is centered at that level, rather than extending directly to boundary clarity in the image domain.

---
**Official Review**

**Summary:** This paper proposes a learning framework that improves the accuracy-efficiency trade-off in medical image registration by leveraging the piece-wise smooth prior. The proposed method was evaluated on two medical image datasets involving cardiac MRI and abdomen CT images. This method transforms the deformable registration problem into a keypoint detection task and shows potential for segmentation tasks.

**Soundness:** 3  **Presentation:** 3  **Contribution:** 3

**Strengths:** The proposed method bridges a gap in the existing literature concerning the balance between registration accuracy and computational efficiency, being capable of enforcing global smoothness while respecting local discontinuities. The paper is well written, with a very clear description of the methodology.

**Weaknesses:**
1. The major concern is the research focus of this study, which might not be of sufficient significance in the field of medical image registration.
After the introduction of deep learning-based registration methods, e.g., VoxelMorph, existing methods have become very fast, allowing real-time registration on GPUs. Under this situation, only a few studies have specifically focused on improving efficiency, which suggests that this topic might not be a real problem in the community.
2. Another concern is the generalizability of the P-S assumption. In this study, the assumption was exemplified and evaluated on cardiac MRI and abdomen CT images, where there are not many complex anatomical structures and local deformations. It is important to evaluate the proposed method on the well-benchmarked brain MRI registration tasks, in which the P-S assumption may fail.

**Questions:**
1. In Figure 4 and Figure 5, why not include VoxelMorph in the comparison? VoxelMorph is the most widely benchmarked method and has high efficiency with a low number of parameters.
2. There is a recent registration study in CVPR (CorrMLP, Meng et al. 2024), which is based on a motivation that directly conflicts with this paper's. CorrMLP attempted to capture long-range dependency among full-resolution image details in an efficient way (using MLPs), while this paper suggests that only low-resolution features are sufficient. So, it is interesting to compare with CorrMLP: did the proposed method achieve similar registration accuracy while reducing much of the computational complexity?

**Flag for ethics review:** No ethics review needed.
**Rating:** 6  **Confidence:** 4  **Code of conduct:** Yes

---
**Official Comment by Authors (cont'd)**

Please let us know if you do not agree with the above basic logic. Below, we address your detailed points:

1.
**The Discontinuities Produced by Ada-Cost:**
   Figure 4 in the revised manuscript clearly demonstrates that Ada-Cost produces discontinuities that other methods fail to capture, covering both types of discontinuities outlined in the **Illustration of Displacement Field Discontinuities**.

   To assist in visualization, we have marked these discontinuities in the figure via this [link](https://ibb.co/GP9G25s). Type 2 discontinuities occur less frequently than type 3. Specifically:
   - The region in the **red box** contains type 2 discontinuities.
   - The region in the **green box** contains type 3 discontinuities.

   In these marked regions, all other methods fail to capture such discontinuities, further highlighting Ada-Cost's ability to preserve local discontinuities **effectively, though not perfectly**.

2. **Our Observations/Prior Knowledge**:
   - The following statement reflects an observation based on medical imaging: "Distinct boundaries often exist between organs and the background or neighboring organs... while clear and well-defined boundaries are formed by intensity differences between these regions (see Fig. 1, columns 1&2). These consistent intra-region smoothness and inter-region boundaries indicate that certain medical images exhibit piece-wise smooth structures."
   - **Please distinguish an observation from a technical-innovation claim.** This observation is based on what our clinicians commonly encounter in daily medical scans and does not imply that our method can guarantee sharp boundaries. Achieving guaranteed **sharp boundaries whenever necessary** would indeed be extremely challenging and represent a major breakthrough, which we do not claim in this work.

3. **P-S Assumption and Sharp Boundaries**:
   "If the P-S assumption inherently does not guarantee sharp boundaries, why are boundaries a key component of your prior?" We believe this reflects a misunderstanding of basic logic rather than an issue with the proposed method. Allow us to illustrate this with an analogy: **if the ICLR conference inherently does not guarantee paper acceptance, why is your paper still one of the submissions to this venue?** In essence, by incorporating the prior, our model has a **better capability** of preserving sharp boundaries compared to other learning-based methods. However, this **does not imply that it can achieve 100% success** in capturing all sharp boundaries. The difficulty of the abdomen dataset, as indicated by Dice scores below 70% across all methods, demonstrates that certain correct displacements are inherently missed.

4. "To address these discontinuities, some works have employed bilateral filters [27, 28], which preserve edges and improve registration performance in the presence of local discontinuities."
   We do not see any inconsistencies between these prior approaches and ours, though we agree the sentence could be clarified for better readability. Revised sentence: "To address these discontinuities, some works have employed bilateral filters [27, 28], which **help** preserve edges and improve registration performance in the presence of local discontinuities."

   We believe the aim of our proposed method overlaps with these prior approaches, as we extend the neural network's ability to **help** preserve edges and sharp boundaries, thereby improving registration performance. However, our method goes beyond traditional approaches, as our filters in the bilateral grid are end-to-end trainable and learnable. The benefits of using learnable adaptive filters are demonstrated in Table 2. For example, while ConvexAdam applies traditional bilateral filters and shows a slight increase in Dice, the improvement is marginal compared to the gains achieved by Ada-Cost.

5. "If your method is designed to preserve local discontinuities, how can it fail to guarantee sharp boundaries?"
   This reflects a logical misunderstanding rather than an issue with the proposed method. Allow us to provide an analogy: **if your paper is written to get accepted by ICLR, how can it fail to guarantee paper acceptance?** Similarly, while our method is designed to preserve local discontinuities and improves sharp-boundary preservation compared to other learning-based methods, it does not guarantee sharp boundaries in every case. This is due to the inherent complexity and challenges of the datasets, such as the abdomen dataset, where no method achieves perfect performance.

In conclusion, we understand that some details of the paper may be challenging to fully grasp, and we are committed to addressing every single one of your concerns to ensure clarity and better understanding.

---
**Part 2: Addressing Questions**

[Q1] Reply: We agree that VoxelMorph is highly efficient with a low parameter count, and we did not exclude it intentionally from Figs. 5 and 6. The main reason for its exclusion is that VoxelMorph is not as competitive as other methods, and adding it would further crowd the figures. Here are its relevant statistics:
1. **Figure 5:** VoxelMorph would be positioned within the polygon formed by the lines of FourierNet and TransMorph, with Dice: 76.35%, Multi-Adds: 19.5 G, and Parameter Size: 0.32 MB.
2. **Figure 6:** VoxelMorph would appear near TransMorph, slightly to its upper left, with Dice: 47.05%, Multi-Adds: 73.30 G, and Parameter Size: 0.32 MB.

[Q2] Reply: Thank you for mentioning CorrMLP.
While CorrMLP has an interesting approach, we would like to clarify that **AdaWarp shares the same motivation as CorrMLP**. Although not explicitly stated in the manuscript, low-resolution feature maps from deeper layers of neural networks inherently possess larger effective receptive fields (ERFs) than shallower layers.

Please refer to the ERF visualization via [this link](https://ibb.co/BKtSJk2), also included in the revised manuscript appendix. Darker and more widely spread regions indicate larger ERFs. The details of the ERF computation can be found in [1]. Key observations:
- **Leveraging Large Receptive Fields:** Subfigures (a), (b), and (c) illustrate feature maps from different encoder levels (L1: full resolution, L2: 1/2 downsampled, L3: 1/4 downsampled) of VoxelMorph. Deeper layers (e.g., L3) have larger ERFs, confirming that low-resolution features from deeper layers capture broader context. AdaWarp leverages the deepest encoder layer for the largest possible ERF. As shown in Table 4 (first row vs. last row), maintaining object boundaries (via AdaWarp) is essential for accuracy, as large ERFs alone are insufficient.
- **Effectiveness of AdaWarp Over Swin-Unet:** We compared ERF heatmaps of Swin-Unet and Ada-Swin (pre-softmax feature maps of the models used in Table 4). Both share identical encoders, differing only in the decoder (Swin-Unet uses a U-Net structure, while Ada-Swin uses AdaWarp). Ada-Swin shows larger ERF regions and achieves 1.89% higher accuracy than Swin-Unet.

As for the results of CorrMLP, we used the source code from their public repositories and ran experiments on the cardiac and abdominal datasets. The training settings were identical to those used for AdaWarp, including the same regularization strength and scaling-and-squaring integration with 7 steps. We have updated Tables 1 and 2 in the revised manuscript to reflect these results, and we briefly list them here for comparison.

**Cardiac Dataset:**
| Model | Avg. (%) | RV (%) | LVM (%) | LVBP (%) | HD95 (mm) ↓ | SDlogJ ↓ |
|----------------|----------|--------|---------|----------|-------------|-----------|
| Initial | 58.14 | 64.50 | 48.33 | 61.60 | 11.95 | - |
| CorrMLP | 77.58 | 74.84 | 75.68 | 82.21 | 9.23 | 0.052 |
| Ada-Res (Ours) | 79.20 | 78.14 | 76.31 | 83.15 | 8.33 | 0.050 |

**Abdomen Dataset:**
| Model | Type | Dice (%) | HD95 (mm) ↓ | SDlogJ ↓ |
|-------------------|-------|----------|-------------|-----------|
| Initial | - | 30.86 | 29.77 | - |
| CorrMLP | L | 56.58 | 20.40 | 0.16 |
| Ada-Cost (Ours) | L&D | 62.74 | 15.03 | 0.12 |

From the results, CorrMLP proves to be a highly competitive technique, outperforming all other baselines in registration accuracy while maintaining competitive smoothness of the deformation field. However, we observed a discrepancy between our reproduced results and those reported in Table 2 of the CorrMLP manuscript [2]. While we used the same data split for training and testing, they reported an Avg. Dice of 81.0%, whereas ours was 79.2%.

Upon further investigation, we found this discrepancy may stem from differences in image preprocessing. CorrMLP resampled images to a voxel size of 1.5x1.5x3.15 mm³ and cropped to 128x128x32, while we resampled to 1.8x1.8x10 mm³ and cropped to 128x128x16. The ACDC dataset protocol [3] specifies slice thicknesses ranging from 5 mm to 10 mm and spatial voxel sizes between 1.34x1.34 and 1.68x1.68 mm².

We are uncertain whether CorrMLP's preprocessing aligns with standard practices, as upsampling from 5-10 mm to 3.15 mm in the axial direction introduces redundant information, potentially inflating the Dice score.

[1] Luo, W., et al. Understanding the effective receptive field in deep convolutional neural networks. NeurIPS 2016.

[2] Meng, M., et al. Correlation-aware Coarse-to-fine MLPs for Deformable Medical Image Registration. CVPR 2024.
[3] Bernard, O., et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? TMI 2018.

---
**Comment by Authors**

Dear Reviewer TEi6,

We hope this message finds you well. As the deadline for the discussion period approaches, we would like to kindly remind you that we **have not yet received any follow-up feedback from you**. To make progress, we would appreciate it if you could clarify **a few points** from your previous review.

> "While the method leverages an encoder to extract a latent representation that approximates the deformation field at a low resolution, this approach mainly contributes to the model's efficiency but is not unique."

To make this critique more actionable, could you **suggest specific approaches that improve efficiency in image registration** and provide a comparison of their **pros and cons** relative to the proposed method? This would help us better contextualize and address your concern.

> "The use of latent feature representations for similar tasks has already become common in the field."

We would like to clarify that our paper **does not claim the use of latent feature representations as a novelty or contribution**. This point is merely a common component in modern neural network architectures.

> "The core of AdaWarp is a differentiable bilateral grid, which naturally incorporates the P-S prior. In implementation, the guidance map aids in processes like splatting, blurring, and slicing. This incremental modification lacks sufficient novelty."

Could you elaborate on why this is considered an incremental modification? To substantiate your claim, we kindly ask you to address the following questions:
- Are there any existing methods that perform the same process as a fully differentiable bilateral grid? If so, can you **list them and show how** they compare to ours in terms of **novelty and functionality**?
- **Which part** specifically do you consider incremental? Could you **point out the subtle modifications** we have made that lead to this characterization?

Your insights would be invaluable in helping us refine our work and better address the concerns raised. We sincerely appreciate your time and look forward to your response.

Best,
The Authors

---
**Reply to Weakness Section**

Dear Reviewer TEi6,

Thank you for your critiques regarding the novelty of the manuscript. Please refer to the revised manuscript and the 2nd point in the **Author Response Summary** for detailed explanations of our claims of technical novelty. Specifically addressing your concerns:

1. We agree that latent feature representations are a common technique used in most neural networks, and we did not claim this as novel. However, the major novelty of our work lies in addressing a significant gap in the registration literature by seamlessly incorporating the physical prior, i.e., the piece-wise smooth assumption, into neural networks in an end-to-end trainable manner. To the best of our knowledge, no existing literature has achieved this.

2. We understand that simple incremental modifications may lack sufficient novelty. While a differentiable bilateral grid may appear straightforward at first glance, there is no fully functional and trainable bilateral grid for learnable adaptive filtering in either the registration or natural image processing literature. As discussed in the revised manuscript and the 2nd point in the **Author Response Summary**, the closest methods, such as deep bilateral grids [1][2], use channel shuffling to inadequately represent the range dimension, resulting in suboptimal performance. Our AdaWarp addresses this limitation and holds broader implications for future applications in image processing.

[1] Gharbi, M., Chen, J., Barron, J.T., Hasinoff, S.W. and Durand, F., 2017. Deep bilateral learning for real-time image enhancement. ACM Transactions on Graphics (TOG), 36(4), pp. 1-12.

[2] Xu, B., Xu, Y., Yang, X., Jia, W. and Guo, Y., 2021. Bilateral grid learning for stereo matching networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12497-12506).

---
**Reply to author's comment**

Thank you for addressing many of the concerns raised in the initial review. I appreciate the effort the authors have put into improving the manuscript. While several issues have been resolved, I still find the paper's motivation (or "story") lacking experimental validation. As mentioned in Weakness 2 and Question 3, there remains insufficient evidence to demonstrate that the proposed method can achieve registration results with sharp boundaries (no visual results). This lack of solid experimental support undermines the alignment between the proposed method and the claimed contributions of the paper.

To clarify, I am not suggesting that the method itself lacks merit; it is indeed novel within the registration field and holds potential for future impact. However, the current experimental section of the paper does not convincingly support its claims, which limits its immediate contribution.

After careful consideration, I have decided to maintain my score.

---
**Reply to Final Review Comments**

Dear Reviewer MgEu,

Thank you for your thoughtful review and for acknowledging that our responses have largely addressed your concerns. Since the primary issues have been resolved, we would like to highlight the broader impact and contributions of our work, which we hope might encourage you to **increase the score**.

1. **Interpretability and Simplicity**:
   - Our method seamlessly incorporates a physical prior (piece-wise smoothness) into a neural network, enhancing interpretability while achieving strong performance.
   - The architecture is **extremely simple**, making it easy to use and adapt for other researchers, while still setting a new baseline in learning-based registration.

2. **Connecting Attention Mechanisms and Adaptive Filtering**:
   - By introducing learnable adaptive filtering, we have drawn initial connections between self-attention and learnable adaptive filtering, as well as between gated attention and learnable adaptive filtering. This potentially generalizes modern neural network architectures and positions learnable adaptive filtering as **a foundational block for the next generation of neural networks**.

3. **Connecting Traditional Algorithms and Modern Neural Networks**:
   - In addition, our method bridges the gap between traditional image processing methods, such as bilateral filtering, and modern neural network mechanisms. This fusion not only enriches the medical imaging field but also offers a pathway for integrating classical techniques into contemporary deep learning frameworks.

4. **Broader Applicability**:
   - Beyond the cardiac and abdomen datasets, we have demonstrated AdaWarp's versatility with preliminary results on keypoint-based lung motion estimation and medical image segmentation. These results showcase its potential to address diverse and critical tasks for the community.

Given that our responses have addressed your concerns and our work contributes to the community through its interpretability, simplicity, and broader impact, we kindly request that you reconsider and further increase your score. We believe our study offers meaningful insights for researchers and practitioners alike.

Thank you again for your engagement and valuable feedback.

---
**Comment by Authors**

Dear Reviewer RMNZ,

Thank you for taking the time to provide thoughtful feedback and for supporting the potential acceptance of our paper at ICLR 2025. We understand and respect your reasoning for maintaining your score. Your comments have been very helpful, and we will keep working to improve both the contributions and the writing in future work.

Best,
The Authors

---
**Comment by Reviewer**

I have no objection to this paper being accepted at ICLR 2025. However, in my assessment, the paper's current state falls between a 6 and a 7, but does not quite reach an 8 based on my standards. While the work shows potential, I feel that both the contribution and the writing do not meet the level of quality required for an 8.

Since the scoring system does not allow a 7, I can only justify maintaining my score at a 6. I cannot raise my score simply because I wish for the paper to be accepted. Therefore, after careful consideration, I have decided to maintain my score.

---
**Part 2: Addressing [W3] and [Q2]**

[W3] Reply: As stated in our response to Q1, AdaWarp is a plug-and-play module rather than a full backbone network, allowing flexibility in selecting the most suitable backbone based on dataset specifics to achieve higher accuracy. Contrary to the concern that using different network structures for different datasets might lead to inconsistencies or generalizability issues, we believe this demonstrates AdaWarp's adaptability across various architectures and datasets with differing modalities and input constraints.

**Network Architectures:**
- Both Ada-Res and Ada-Cost perform well on the ACDC dataset, as it is relatively simpler than the Abdomen dataset. While ACDC has local discontinuities and sliding motions, displacements are smaller (up to 15 voxels) than in the Abdomen dataset (up to 60 voxels).
- On the Abdomen dataset, Ada-Res achieves suboptimal performance due to the larger displacements.
However, AdaWarp can be easily adapted and integrated into a more suitable backbone to optimize task-specific performance, such as one combining the image pyramid and discrete optimization concepts, highlighting its generalizability rather than inconsistency.

**Input Constraints and Modalities:**
- The ACDC dataset is used for unsupervised learning, where segmentation masks are not used during training or testing.
- The Abdomen dataset is used for semi-supervised learning, where segmentation masks are provided only during training as auxiliary loss supervision and are not used during testing.

The unsupervised and semi-supervised settings remain consistent across all methods. Rather than reflecting inconsistency, this setup demonstrates AdaWarp's generalizability across modalities (MRI in ACDC, CT in Abdomen) and input constraints (unsupervised in ACDC, semi-supervised in Abdomen).

**For the ablation studies**, the key variables in AdaWarp are the spatial sampling rate ($s_s$) and the range sampling rate ($s_r$), which we have examined in Section 5.1 of the discussion. Additionally, we analyzed the impact of varying $\lambda$ in Figure 6 and thoroughly examined the accuracy-efficiency and accuracy-smoothness trade-offs. Please let us know if there are specific components or parameters you would like us to explore further.

[Q2] Reply: Thank you for pointing this out. We acknowledge that the description was unclear, and we clarify it here. The segmentation network used is a pretrained network with weights from [1], and the pre-softmax feature maps from its output are used as feature maps for SAMConvex [2]. These feature maps are specifically used to compute the dissimilarity loss in SAMConvex. For details, please refer to their respective papers. Briefly, ConvexAdam [3] and SAMConvex are instance-optimization-based methods, requiring iterative optimization for each input pair rather than amortized optimization. ConvexAdam uses MIND [4] as feature maps, while SAMConvex relies on a contrastively pretrained network (which we lack), so we substitute the segmentation feature maps.

- **How we use the segmentation feature maps:** We use the feature maps of the moving and fixed images as input to Ada-Cost but continue to use raw images for the dissimilarity loss computation. Thus, the feature map does not replace the guidance generator. For more details, refer to our reply to [C] for Reviewer UMDT.
- **Semi-supervised vs. unsupervised:** Yes, this makes Ada-Cost a semi-supervised method on the abdomen dataset. However, other methods on this dataset also use an auxiliary segmentation loss for supervision, so we believe this is fair. For clarity, the Ada-Cost used on the ACDC dataset relies solely on raw images as input, without any segmentation network or segmentation loss.

[1] Liu, J., Zhang, Y., Chen, J.N., Xiao, J., Lu, Y., A Landman, B., Yuan, Y., Yuille, A., Tang, Y. and Zhou, Z., 2023. Clip-driven universal model for organ segmentation and tumor detection. ICCV 2023.

[2] Li, Z., Tian, L., Mok, T.C., Bai, X., Wang, P., Ge, J., Zhou, J., Lu, L., Ye, X., Yan, K. and Jin, D., 2023. SAMConvex: Fast discrete optimization for CT registration using self-supervised anatomical embedding and correlation pyramid. MICCAI 2023.

[3] Siebert, H., Großbröhmer, C., Hansen, L. and Heinrich, M.P., 2024. ConvexAdam: Self-Configuring Dual-Optimisation-Based 3D Multitask Medical Image Registration. TMI 2024.

[4] Heinrich, M.P., Jenkinson, M., Papież, B.W., Brady, S.M. and Schnabel, J.A., 2013. Towards realtime multimodal fusion for image-guided interventions using self-similarities. MICCAI 2013.

---
**Part 3: Addressing [W2] and [Q3-5]**

[W2&Q3] Reply: We have added more references to image registration baselines in the related work and included comparisons with two recent multi-scale approaches, CorrMLP and RDP, both quantitatively and qualitatively. For qualitative results, please refer to Figure 4 and Figure 10 in the revised manuscript. A brief summary of key observations is provided in the third point of the **Author Response Summary**.

[Q4] Reply: We apologize for not recording detailed timing information in the original manuscript. While timing depends on the model's parallel or sequential structure, inference and training times are generally correlated with the Multi-Adds (G) used. Since all models were trained under the same configuration, Multi-Adds (G) serves as a proxy for timing comparison. Additionally, we note that Ada-Cost converges faster than other methods. For example, on the abdomen dataset, Ada-Cost typically converges within 20-30 epochs, whereas other methods may require 50-80 epochs.

[Q5] Reply: Thank you for raising this question, which ties closely to the qualitative results. The interpretability of AdaWarp can be summarized as follows:
- **Physical Prior**: AdaWarp is motivated by the piece-wise smooth assumption, prevalent in medical imaging. Unlike other learning-based methods that treat the model as a black box, AdaWarp introduces interpretability by constructing cost volumes after feature extraction and applying edge-preserving filtering to reflect local discontinuities. The qualitative results confirm that these discontinuities are effectively preserved.
- **Simpler Architecture**: Unlike methods relying on complex modules such as self-attention or recurrent networks, AdaWarp uses only a handful of convolutional layers.
These simpler mechanisms are more interpretable and easier to understand compared to advanced neural network components.

---
**Thanks for reply**

Thank you, authors, for your active engagement and thorough responses during the rebuttal phase. After carefully reading all the reviewers' comments alongside your replies, I have decided to maintain my initial score.

---
**Part 1: Addressing [A], [B] and [C]**

Dear Reviewer UMDT,

We sincerely thank you for your valuable comments. Below, we address your points [A], [B], [C].

[A] Reply: Thank you for pointing this out. We agree that the statement could be clarified. Specifically, traditional iterative methods can be categorized into two main types, continuous optimization-based and discrete optimization-based, each with distinct characteristics.
- **Continuous optimization:** These methods typically linearize the dissimilarity term, requiring a series of small iterative updates. Each step accounts for only a minor deformation, which leads to slow convergence and difficulty in handling large deformations.
- **Discrete optimization:** These methods are better suited for handling large deformations and require far fewer iterations to converge, making them favorable for datasets with large displacements. However, their primary drawback is high memory consumption, which grows exponentially with the disparity volume size.

By "the ability to incorporate contextual information," we refer to the advantage of learning-based methods in integrating label supervision, such as segmentation masks or keypoints. This integration is less straightforward in both continuous and discrete optimization-based approaches. While methods like ConvexAdam can incorporate segmentation masks generated by models like nnU-Net, such masks are unavailable without neural network-based learning. This limitation illustrates how iterative methods typically lack access to contextual information.

We hope this explanation clarifies our statement.

[B] Reply: We appreciate the reviewer's attention to this detail. The proposed AdaWarp method addresses deformable image registration by incorporating the piece-wise smooth (P-S) assumption. In this context, adding one additional range dimension is sufficient to achieve the desired effect. Since most medical image registration problems involve single-channel inputs, a single range dimension can effectively represent voxel-wise intensity differences. With a higher sampling rate $s_r$, this approach enhances edge distinction and ensures adequate sensitivity to object boundaries. Expanding to more dimensions could indeed be interesting, allowing the representation of contextual differences beyond raw intensity differences. Such an extension could generalize self-attention from adaptive Gaussian filtering to a broader learnable adaptive filtering framework. However, this direction is outside the scope of the current paper.

[C] Reply: Thank you for highlighting this. A visual explanation of the bilateral grid process for a one-dimensional signal is shown in Fig. 2. In this example, the original signal is represented as $f: x \rightarrow \mathbb{R}$, where $f(x)$ corresponds to the image intensity at position $x$. When projecting to a higher-dimensional space (from 1D to 2D here) with an additional range dimension (image intensity), the image is represented as $f: (x, r) \rightarrow \mathbb{R}$, where $r$ denotes the intensity at position $x$. Here, $f(x, r)$ represents the corresponding intensity before blurring ($f(x, r) = r$). The guidance map serves as the additional coordinate used to access elements in the bilateral grid. Instead of using raw image intensities directly as this coordinate, we employ a trainable guidance map generator.
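As a rough, hypothetical illustration of the splat/blur/slice pipeline described here (not the paper's implementation; the grid resolutions, the Gaussian blur, and the use of normalized intensity as the guidance coordinate are all assumptions made for this toy 1D example):

```python
import numpy as np

def bilateral_grid_filter_1d(signal, guidance, n_pos=8, n_range=8, sigma=1.0):
    """Toy 1D bilateral-grid filter: splat -> blur -> slice.

    `signal`   : 1D array of intensities f(x).
    `guidance` : 1D array in [0, 1] giving the range coordinate r(x)
                 (here simply a normalized intensity; a learned guidance
                 map would play this role in a trainable variant).
    """
    n = len(signal)
    grid = np.zeros((n_pos, n_range))     # accumulated values
    weights = np.zeros((n_pos, n_range))  # homogeneous weights

    # Splat: deposit each sample at its (position, range) cell.
    px = np.minimum(np.arange(n) * n_pos // n, n_pos - 1)
    pr = np.minimum((guidance * n_range).astype(int), n_range - 1)
    for i in range(n):
        grid[px[i], pr[i]] += signal[i]
        weights[px[i], pr[i]] += 1.0

    # Blur: Gaussian averaging over both grid axes.
    def gauss1d(size):
        t = np.arange(size)
        k = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
        return k / k.sum(axis=1, keepdims=True)

    kp, kr = gauss1d(n_pos), gauss1d(n_range)
    grid = kp @ grid @ kr.T
    weights = kp @ weights @ kr.T

    # Slice: read back at each sample's own (position, range) coordinate.
    return grid[px, pr] / np.maximum(weights[px, pr], 1e-8)

# A step edge stays sharp: the two plateaus land in distant range
# cells and are therefore blurred separately.
x = np.concatenate([np.full(32, 0.1), np.full(32, 0.9)])
y = bilateral_grid_filter_1d(x + 0.01 * np.sin(np.arange(64)), guidance=x)
```

The point of the toy example is that ordinary Gaussian blurring would smear the step edge, while lifting into the (position, range) grid keeps the two plateaus in separate cells, which is the edge-preserving behavior the P-S assumption relies on.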
This design allows the network to adaptively learn a more effective coordinate representation, improving the flexibility and performance of the model.

---
Dear Reviewer TEi6,

Thank you for your engagement (if any) during the rebuttal phase. However, your decision to maintain the initial score, despite our detailed responses and revisions, **lacks constructive critique or justification**. Simply labeling our work as "incremental" without evidence or actionable feedback is **unhelpful and falls short of the professionalism expected in an ICLR review process**. We hope you reflect on this, as **fairness and transparency are principles** that the academic community ultimately upholds.

Sincerely,
The Authors

---
**Reply Part3**

[D&E] Thank you for the time and effort spent on these. I believe that all of this should be either part of the paper or at least of the supplementary material.

[I] Thank you very much for this clarification. Since Ada-Cost demonstrates superior performance (based on the table above), I am wondering what the purpose of Ada-Res is. Moreover, the need to adapt the method for the abdominal dataset raises the question of whether the method has to be adapted for every new dataset, especially for another modality or body part. I believe that this point needs to be examined again by the authors, with a better explanation provided in the paper.

---
**Part 3: Addressing [D], [E] and [I]**

[D&E] Reply: Thank you for pointing out the lack of descriptions for the $\lambda$ values. We did not intentionally tune these hyperparameters but performed a grid search with $\lambda = 0.01, 0.1, 1.0$, and $5.0$. We found $\lambda = 0.01$ to be optimal for the ACDC dataset and $\lambda = 1.0$ for the abdomen dataset (note: the $\lambda = 5.0$ in the original manuscript was a typo). Most baseline methods use the same parameters and training settings as AdaWarp.
Below, we provide more details.

**ACDC Dataset:**
All learning-based methods adopt the same hyperparameters as AdaWarp, with $\lambda = 0.01$, MSE as the dissimilarity loss, and scaling-and-squaring with 7 steps for the diffeomorphic transformation model. For Figure 4, while keeping other hyperparameters the same, we vary computational complexity by adjusting the starting channel count in FourierNet, LKU-Net, and Ada-Res, and by modifying the backbone of TransMorph (tiny, small, and normal).

**Abdomen Dataset:**
The abdomen dataset presents more challenges due to the large displacement problem. To clarify:
- FourierNet, VoxelMorph, TransMorph, and Ada-Cost use local NCC as the dissimilarity loss with $\lambda = 1.0$. ConvexAdam and SAMConvex also use $\lambda = 1.0$, employing MIND and segmentation feature maps, respectively, to compute the dissimilarity. All these methods adopt scaling-and-squaring with 7 steps for the diffeomorphic transformation model.
- For TextSCF, we follow its original implementation with $\lambda = 0.1$ and without the diffeomorphic transformation model. The $\lambda = 0.1$ version with integration is also presented in Figure 6.
- Both the LKU-Net and LapIRN results in Table 2 use $\lambda = 1.0$ without the diffeomorphic transformation model. Additionally, we ran counterparts using scaling-and-squaring with 7 steps for the diffeomorphic transformation, with the following results:

**Abdomen Dataset:**

| Model | Type | Dice (%) | HD95 (mm) ↓ | SDlogJ ↓ |
|---|---|---|---|---|
| Initial | - | 30.86 | 29.77 | - |
| LKU-Net | L | 52.78 | 20.56 | 0.98 |
| LKU-Net (diff) | L | 52.08 | 20.34 | 0.28 |
| LapIRN | L | 54.55 | 20.52 | 1.73 |
| LapIRN (diff) | L | 51.39 | 20.89 | 0.06 |

Both methods show performance degradation in anatomical alignment, as measured by Dice, when required to produce smoother and more plausible deformation fields.

[I] Reply: Thank you for pointing this out. This question is similar to the first one raised by Reviewer RMNZ, and we address the essentials here.

**Why:** We do not intentionally use different architectures; the choice is application-driven. As also mentioned in our reply to Reviewer RMNZ, the core of this paper is the AdaWarp module, which generalizes to different architectures as needed.
Initially, we developed Ada-Res for the ACDC dataset to achieve reasonably good performance. However, applying Ada-Res to the abdomen dataset revealed suboptimal performance due to challenges like the aperture and large displacement problems. To address this, we tailored a solution for the abdomen dataset by incorporating image pyramids and discrete optimization, inspired by prior multi-scale approaches and ConvexAdam, leading to the development of Ada-Cost. Leveraging AdaWarp's flexibility, we integrated these approaches effectively.
While we had not previously tested Ada-Cost on the ACDC dataset, we have now conducted these experiments as part of the rebuttal. The results are as follows:

**Cardiac Dataset:**

| Model | Avg. (%) | RV (%) | LVM (%) | LVBP (%) | HD95 (mm) ↓ | SDlogJ ↓ |
|---|---|---|---|---|---|---|
| Initial | 58.14 | 64.50 | 48.33 | 61.60 | 11.95 | - |
| Ada-Res (Ours) | 79.20 | 78.14 | 76.31 | 83.15 | 8.33 | 0.050 |
| Ada-Cost (Ours) | 79.82 | 77.58 | 77.95 | 83.92 | 8.98 | 0.050 |

This is interesting: methods that perform well on ACDC may struggle on more challenging datasets like the abdomen dataset, whereas methods designed for challenging datasets can be easily adapted to simpler datasets like ACDC. Note that for the Ada-Cost results on ACDC, raw images were used as input, without feature maps from a segmentation network.

---
**Reply to Part 3: Addressing [W2] and [Q3-5]**

Thank you for your effort and for providing additional figures. I truly appreciate the improvements made to the manuscript. However, the visual results do not clearly demonstrate that the proposed method achieves significantly sharper boundaries compared to the baselines. For instance: in Figure 4, while there is a slight improvement in boundary sharpness at the top of the liver (visible only after zooming), the boundary constraints appear to fail in regions like the ribs when compared to ConvexAdam. In Figure 10, I struggled to observe any noticeable differences in boundary sharpness between the proposed method and the baselines.

Suggestions: It is unclear whether these subtle differences stem from the choice of images for visualization or limitations in the method itself. If it is the former, I recommend selecting a subject that better highlights the advantages of your approach. If it is the latter, consider revising your narrative to align more closely with the actual performance of the method.
Either option would be acceptable.

Additionally, I suggest visualizing the warped images alongside correlated metrics with zoomed-in views to make the results more evident. This approach would make it easier for readers to understand your point. You might find Figure 5 in "NODEO: A Neural Ordinary Differential Equation Based Optimization Framework for Deformable Image Registration" and Figure 3 in "A Plug-and-Play Image Registration Network" to be helpful references for presenting such visualizations.

Overall, the paper is progressing well. If you can address these points, I would be happy to reconsider and potentially raise my score.

---
**Author Response Summary**

Dear Reviewers,

We sincerely thank you all for your valuable comments, which have greatly helped us improve the clarity and presentation of our manuscript. We have uploaded a revised version incorporating most of your comments and suggestions. Below, we summarize the key clarifications and major changes made to the manuscript for your reference.

1. **Code Release**:
   In response to comment [J] from Reviewer **UMDT** and for the benefit of potential readers, we confirm that the source code will be released immediately upon acceptance of the paper.

2. **Novelty**:
   We sincerely thank Reviewers **UMDT** and **RMNZ** for acknowledging the novelty and potential future impact of this work on the image registration community. We also appreciate Reviewer **MgEu** for recognizing that we have largely addressed their concerns regarding the research focus and the utility of the piece-wise smooth assumption.
   For Reviewer **TEi6** and potential future readers, we summarize the key *technical innovations* of this manuscript below:
   - **Physical Prior**: Current learning-based registration frameworks lack an end-to-end learnable approach to integrate the physical prior, i.e., the piece-wise smooth assumption, into neural networks, resulting in suboptimal performance. While this assumption is well-suited to medical images, none of the existing methods have adequately addressed it.
   - **Differentiable Bilateral Grid**: To our knowledge, no prior work has proposed an end-to-end trainable bilateral grid with learnable adaptive filtering. The closest existing method, DeBG (discussed in the manuscript), employs channel shuffling instead of real splatting for the range dimension, leading to suboptimal results.
   - **Registration Performance**: By integrating AdaWarp, we achieve state-of-the-art registration performance using a simple network architecture with only a handful of convolutional layers. This surpasses existing learning-based models and could serve as a new milestone in image registration, providing a strong baseline with significant potential for further improvement.

3. **Qualitative Results**:
   In response to comment [K] from Reviewer **UMDT** and comments [W2] and [Q3] from Reviewer **RMNZ**, as well as for the benefit of other reviewers, we have included qualitative results for both datasets in the revised manuscript. Please refer to Figure 4 for the abdomen dataset and Figure 10 (in the appendix) for the cardiac dataset. Below, we briefly summarize key observations:
   - **Summary**: Our method achieves a piece-wise smooth displacement field, ensuring smooth displacements within regions while allowing differences between regions based on local motion.
   - **Cardiac Dataset**: AdaWarp performs better at the boundary between the right ventricle and left ventricle myocardium, where other methods show disorganized displacements. Additionally, unlike other methods that display two shrinking centers during registration from the end-diastole to end-systole phase, AdaWarp more realistically produces a single center.
   - **Abdomen Dataset**: AdaWarp effectively handles large deformations and captures local discontinuities. The sharp changes near boundaries (e.g., left/right kidneys, liver, and between the body and background) demonstrate its ability to incorporate the piece-wise smooth prior.

4. **Further Clarification on $\lambda$**:
   In the first version of the manuscript, we reported that Ada-Cost achieved a Dice score of 62.74%, which was based on $\lambda = 5.0$. In accordance with the other baseline methods, we have updated the table with results for $\lambda = 1.0$. For the tables on both datasets, all learning-based methods (except TextSCF) were trained under the same configuration, using scaling-and-squaring with 7 steps for diffeomorphic transformations.

5. **Further Clarification on Baselines**:
   Since AdaWarp focuses on addressing challenges in learning-based registration frameworks, we have removed traditional iterative methods from the revised manuscript to avoid confusion. Their inclusion or exclusion does not impact the claims of the paper. However, we note that traditional iterative methods are significantly harder to tune compared to learning-based approaches, which we consider a key limitation of such methods.

---
In your manuscript, the concept of sharp boundaries stems from statements such as:
"Previous studies have shown that incorporating prior knowledge improves the ... distinct boundaries often exist between organs and the background or neighboring organs ... while clear and well-defined boundaries are formed by intensity differences between these regions (see Fig. 1, columns 1&2).
These consistent intra-region smoothness and inter-region boundaries indicate that certain medical images exhibit piece-wise smooth structures."
If I understand correctly, your P-S assumption is based on the idea that within organs the registration field should be smooth, while at the boundaries of organs there should be sharp transitions. In my review, when I refer to "sharp boundaries," I am aligning with the terminology you use: "clear, well-defined boundaries."

My main confusions:
1. Your P-S assumption is motivated by the prior knowledge that "distinct boundaries often exist between organs and the background or neighboring organs." However, your results do not guarantee the preservation of these boundaries. If the P-S assumption inherently does not guarantee sharp boundaries, why are boundaries a key component of your prior? Furthermore, you mention related works: "To address these discontinuities, some works have employed bilateral filters [27, 28], which preserve edges and improve registration performance in the presence of local discontinuities." It seems inconsistent that these prior approaches preserve boundaries, but your method, which should be better, cannot guarantee them.
2. Based on your recent reply, I am also unclear about how you define "local discontinuities" in your work. My understanding is that local discontinuities refer to abrupt changes in the displacement field, often representing anatomical or structural boundaries. If your method is designed to preserve local discontinuities, how can it fail to guarantee sharp boundaries?

Regarding the evaluation of sharp boundaries, I acknowledge that it can be challenging to quantify this numerically. However, at the very least, the manuscript should provide qualitative evidence where the proposed method demonstrates a clear advantage over the baselines. For example, you could present cases where other baselines fail to preserve sharp boundaries (e.g., instances where organs appear fused together), and your method successfully delineates clear boundaries between those organs.

I hope this reply effectively conveys my concerns. If I have misunderstood the logic or interpretation of your paper, please do not hesitate to point it out. I am open to clarifications and look forward to better understanding your work.

---
**Part 1: Addressing Concerns in Weakness Section**

Dear Reviewer MgEu,

We sincerely thank you for your valuable comments. Below, we address your concerns listed in the weakness section as [W1] and [W2], and answer your questions [Q1] and [Q2].

[W1] Reply: We would like to emphasize that the focus of this study is not solely on improving registration efficiency, but on introducing a novel neural network architecture that achieves the best overall performance in image registration. This overall performance cannot be evaluated using a single surrogate metric, such as Dice. Instead, it requires a multidimensional comparison, taking into consideration factors like the accuracy-efficiency and accuracy-smoothness tradeoffs. We believe these aspects have been overlooked in previous studies, and we elaborate on them below.
1. **Accuracy-efficiency:** Evaluating accuracy or efficiency in isolation is insufficient. Our model demonstrates that with similar accuracy, it achieves lower computational complexity, and with similar computational complexity, it achieves higher accuracy. This balance is essential to claim better overall performance.
2. **Accuracy-smoothness:** A key advantage of learning-based methods is their ability to integrate label supervision. However, without adequately handling the smoothness of the deformation field, they can produce implausible or unrealistic deformations, even with high Dice scores.
   For example, using a pretrained nnU-Net for segmentation achieves over 90% Dice on the ACDC and multi-organ abdomen segmentation tasks. If we then perform label matching [1] directly on the predicted masks, the Dice score improves further, but the resulting deformation field is often unrealistic due to the lack of smoothness and consideration of image textures.

We hope this explanation addresses your concerns.

[W2] Reply: Thank you for pointing this out. We would like to clarify that the cardiac MRI and abdomen CT datasets are indeed suitable testbeds for the proposed piece-wise smooth assumption. However:

1. **Brain MRI datasets and the piece-wise smooth assumption:** Brain MRI datasets do not violate the piece-wise smooth assumption. For adjacent regions with smooth transitions (e.g., left and right thalamus), these regions are treated as a single "piece" under the assumption, similar to existing models. For regions with clear boundaries (e.g., cerebral white matter and cortex), our model can perform better by explicitly handling such transitions.
2. **Relative difficulty of brain MRI datasets:** Brain MRI datasets are generally easier compared to cardiac and abdomen datasets. As noted in the second paragraph of the introduction in [2], abdominal scans are more complex than brain scans. Cardiac datasets exhibit local discontinuities and sliding motions, which are absent in brain MRI. Multi-organ abdomen datasets pose additional challenges:
   - The **aperture problem** [3], arising in homogeneous or textureless regions, where the limited local evidence within a small window (defined by the network's *effective receptive field (ERF)* [4]) restricts accurate displacement estimation.
   - The **large displacement problem**, where the displacement of a small structure between image pairs exceeds its own size, making accurate alignment more challenging.

We hope this clarifies our rationale.
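For readers unfamiliar with the smoothness metric quoted throughout this thread: SDlogJ is conventionally the standard deviation of the log Jacobian determinant of the deformation, which the tables above report alongside Dice. A minimal illustrative sketch for a 2D displacement field follows; the finite-difference scheme and the clipping of near-folded voxels are assumptions of this toy version, not the paper's evaluation code:

```python
import numpy as np

def sd_log_jacobian_2d(disp):
    """SDlogJ for a 2D displacement field `disp` of shape (H, W, 2).

    The deformation is phi(x) = x + u(x); the determinant of its
    Jacobian measures local volume change (det < 1 is compression,
    det < 0 means folding). SDlogJ is the standard deviation of
    log(det J) over the field, a common smoothness surrogate in
    registration benchmarks.
    """
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    du_dy, du_dx = np.gradient(disp[..., 0])  # gradients of u_x
    dv_dy, dv_dx = np.gradient(disp[..., 1])  # gradients of u_y
    # Jacobian of phi = identity + grad(u).
    det = (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
    det = np.clip(det, 1e-6, None)  # guard against folded voxels
    return float(np.std(np.log(det)))

# A constant translation changes nothing locally: det J = 1 everywhere.
flat = np.zeros((16, 16, 2)) + 3.0
print(sd_log_jacobian_2d(flat))  # 0.0 for a pure translation
```

This makes the accuracy-smoothness tradeoff in the tables concrete: a warp that maximizes Dice by aggressively stretching labels will show large spatial variation in det J and hence a high SDlogJ, while a rigid shift scores exactly zero.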
[1] Durrleman, S., Prastawa, M., Charon, N., Korenberg, J.R., Joshi, S., Gerig, G. and Trouvé, A., 2014. Morphometry of anatomical shape complexes with dense deformations and sparse parameters. NeuroImage 2014.

[2] Heinrich, M.P., 2019. Closing the gap between deep and conventional image registration using probabilistic dense displacement networks. MICCAI 2019.

[3] Horn, B.K. and Schunck, B.G., 1981. Determining optical flow. Artificial Intelligence 1981.

[4] Luo, W., Li, Y., Urtasun, R. and Zemel, R., 2016. Understanding the effective receptive field in deep convolutional neural networks. NeurIPS 2016.

---
Thanks to the authors for their detailed and insightful response to the concerns I raised. After thoroughly considering your explanations, I have decided to increase my original score.

---
**Official Review**

Summary: The paper proposes a novel method that utilises prior knowledge (the piece-wise smooth assumption) to enhance learning-based registration, striking a balance between computational complexity and accuracy.
The performance is evaluated on a cardiac and an abdominal dataset.

Soundness: 2 | Presentation: 2 | Contribution: 3

Strengths: The paper presents AdaWarp, a novel method that integrates the piece-wise smoothness assumption, enforcing global smoothness while respecting local discontinuities in a learning framework and striking a balance between complexity and accuracy.

Moreover, it demonstrates connections between the adaptive filtering approach and self-attention.

The experiments on two challenging registration tasks, cardiac and inter-subject abdominal registration, demonstrate that AdaWarp outperforms existing methods in accuracy-efficiency and accuracy-smoothness tradeoffs.

Weaknesses: Although I believe that the paper attempts to bridge a gap in the literature by incorporating a differentiable bilateral grid within a learning-based registration framework, I would like to point out several weaknesses and raise some questions regarding the experiments.

[A] I would like to invite the authors to elaborate on this statement regarding iterative optimization-based methods: "As a result, these approaches tend to be time-consuming and lack the ability to incorporate contextual information effectively."

[B] "While high-dimensional filtering can project signals onto arbitrary spaces, we focus on extending by one additional dimension to account for the object boundary."

What is the intuition behind this approach? Is only one additional dimension sufficient? I would like to invite the authors to further elaborate and explain their choice.

[C] The role of the guidance map generator component is unclear. Could the authors please explain why this component is used or needed?

[D] Could the authors clarify whether the same lambda values are used for all methods or if different values are applied? How were these values tuned? Were they also tuned for the baselines?

[E] The proposed method utilizes a diffeomorphic transformation model; however, it is not clear whether the baselines follow the same principle. Could the authors provide a table that explicitly lists the hyperparameters used by each of the baselines along with the transformation model?

[F] The authors chose different baselines for the two datasets, which is puzzling. What is the intuition behind this decision? Is there a reason why this approach was chosen?

[G] The paper presents t-tests for Dice scores but not for other metrics. Is there a reason for this choice? Could the authors extend their t-tests to cover HD95 as well?

[H] "Learning-based methods generally outperform traditional ones in registration accuracy, though with slightly higher SDlogJ."

Do the authors have any intuition as to why this is the case? Normally, I would expect that iterative optimization methods achieve higher accuracy [1].

[1] Hansen, L. and Heinrich, M.P., 2021. Revisiting iterative highly efficient optimisation schemes in medical image registration. MICCAI 2021.

[I] For the abdominal dataset, the proposed method uses ConvexAdam's framework with the same segmentation model as a feature extractor. Is there any reason for this choice? Could the model be trained from scratch? Could the authors elaborate on the design choices, including why the architecture differs depending on the dataset?

[J] The code is not available. Are the authors planning to make their code publicly accessible?

[K] Due to the lack of ground truth, registration is evaluated quantitatively with surrogate measures. However, to ensure the registration's success, it is common practice to inspect the resulting transformed images qualitatively as well. I would like to invite the authors to provide qualitative results for both datasets, as this would substantially strengthen their claims.

Questions: I encourage the authors to consider addressing as many of the points highlighted in the weaknesses section as possible. Additionally, while the paper presents an intriguing and novel approach, the clarity and quality of the presentation could benefit from further refinement.

Ethics review: No ethics review needed.
Rating: 3
Confidence: 4

---
**Reply Part 2**

[F] As one of the other reviewers pointed out, I believe that the results and discussion section is a bit unclear and difficult to understand. With these clarifications, I understand why they chose each method, but in my opinion, this should be included in the paper. However, I still find it quite peculiar that both datasets use different baselines. I would expect to see all the baselines for both datasets, with a justification of why each did not work for each dataset.

Moreover, I am aware that abdominal datasets can be challenging, but I am not convinced that none of ANTs, Demons, or B-spline can yield reasonable performance with careful hyperparameter tuning.

The last remark for this point is that ANTs is a framework and not a transformation. One can, for example, use B-spline or SyN within the ANTs framework.

[H] Thank you for the clarifications on this point.
I can now see the rationale behind the writing.

---
**Reply to Part 1**

I would like to thank the authors very much for engaging in the rebuttal and for their detailed responses and clarifications.

[A] I am not sure whether I am misunderstanding this, but there were segmentation algorithms before deep learning, and at the same time, iterative optimization methods allowed multi-channel registration, where one could incorporate contextual information (e.g., segmentation maps) along with the images. If I have misunderstood, further clarification might be required in the paper to make this point clear.

[B&C] I appreciate the clarification of these points. I strongly believe that these clarifications are missing from the paper, and including them would make it stronger, clearer, and easier to follow. As a result, I would like to invite the authors to include them in the paper.

---
**Paper Decision**

Decision: Reject

---
**Reply to Reply to Part 3: Addressing [W2] and [Q3-5]**

Dear Reviewer RMNZ,

Thank you for your valuable feedback. We would like to clarify and ensure that we have correctly understood your comments. Please feel free to correct us if there are any misunderstandings.

**Main Claims**:
While we understand that you are particularly focused on the expectation of **sharp boundaries** in the proposed method's results, we want to emphasize that **we did not claim in either the original or revised manuscript that the proposed method guarantees sharp boundaries in the warped moving image or the displacement field**.

Our main focus remains on demonstrating that the proposed method achieves a piece-wise smooth displacement field and has the ability to preserve local discontinuities.
Although we observed sharp boundaries in the produced displacement field (as described in Section 4.2.2 of the revised manuscript), this is not a specific claim of the manuscript or of the OpenReview discussion.

**Piece-wise Smooth**:
- To help you and potential readers better understand: the piece-wise smooth assumption does not inherently imply sharp boundaries. While sharp boundaries in a displacement field indicate smooth regions divided by them, a piece-wise smooth field does not necessarily exhibit sharp boundaries.
- For example, consider two adjacent organs connected by tissue. While their intensities within each organ may differ, the movement across their boundary can remain smooth because of the connective tissue. In this case, the piece-wise smooth assumption does not result in sharp boundaries.
- Another example of sharp boundaries in a displacement field occurs when sliding motion happens between an organ and the surrounding abdominal cavity, such as between the liver and kidney, as demonstrated in the qualitative results.
- If the displacement field remains smooth within each respective region while allowing discontinuities across boundaries, it can effectively **reduce artifacts**. This is demonstrated by the fact that other baseline methods produce multiple shrinking centers in the displacement field of the right ventricle, whereas Ada-Cost produces only one center.

To better address your concerns, we would like to seek clarification on the following points:

**Sharp Boundary Results**:
- You have mentioned "sharp boundaries" multiple times in our discussion, but the term seems ambiguous. Could you clarify what you specifically mean by sharp boundaries? Are you referring to sharp boundaries in the warped moving image or in the displacement field?
- While we acknowledge that the integrity of the ribs warped by Ada-Cost may not be better than with ConvexAdam, we attribute this to the displacement field around the rib and background being excessively sharp, rather than insufficiently sharp, leading to rib shrinking. Thus, our question is: how do you define or measure the sharpness of these boundaries? Specifically, what criteria determine whether one result is *sharper* than another?

We hope these questions help clarify your perspective so we can address your concerns more effectively. Once we receive your feedback, we are happy to provide additional qualitative results if necessary. Thank you again for your valuable insights.

---
Thank you for your understanding.
We acknowledge that it was our oversight to assume that readers would share our convention that discussions of discontinuities in image registration commonly refer to the displacement field. This should have been made clearer in the manuscript.

In addition, we would like to emphasize that a better ability to preserve local discontinuities is only one of the benefits of incorporating the P-S prior.
As the name "piece-wise smooth" suggests, it involves two key aspects:
1. The differences between pieces, reflected by the boundaries.
2. The smoothness within each piece.

As demonstrated in Figure 10 of the revised manuscript on the cardiac dataset, Ada-Cost produces more realistic displacement fields. While all methods generate smooth fields within the left ventricle blood pool and left ventricle myocardium, only Ada-Cost produces a smooth field in the right ventricle by having a single shrinking center, matching realistic cardiac motion.
This is further validated by the increase in the Dice score for the right ventricle achieved by both Ada-Cost and DeBG, which also incorporates the P-S prior.\"}", "{\"comment\": \"Dear Reviewer UMDT,\\n\\nThank you for your response and for acknowledging the interest this work holds for the registration community. However, as your comments continue to raise vague and subjective arguments, we feel it necessary to respond directly and clarify our position. \\n\\n> *\\\"However, due to the amount of changes in the manuscript and the way that reviewers do/do not engage in the rebuttal I believe that the paper needs another round of reviews to ensure that the quality after the additions is met.\\\"* \\n\\nAs you are so **rigorous**, could you specify **point by point** which changes require another round of reviews? It is unclear why you would resist reviewing a substantially improved manuscript or why these changes cannot be adequately assessed in this round. \\n\\n> *\\\"More specifically, I am not persuaded about the choice of different baselines yet. In my opinion you could have chosen some common baselines that work for both and then support with strong arguments why you choose the different ones.\\\"* \\n\\nDidn\\u2019t we already include common baselines like VoxelMorph and newer ones like CorrMLP? If you disagree with our approach, please clarify your reasoning with examples of baselines you believe should have been included and explain how they would strengthen the study. \\n\\n> *\\\"Similarly, I am not sure I agree that the iterative methods are more difficult to tune and I certainly do not agree with the choice of removing them because they do not work.\\\"* \\n\\nThis statement appears vague and defensive, as though you may be a developer defending iterative methods rather than evaluating our paper. Nowhere in the manuscript do we claim that learning-based methods are universally better than traditional iterative methods. 
In fact, we explicitly state that in our response, learning-based methods often achieve similar or inferior performance compared to traditional methods in unsupervised settings. \\n\\n> *\\\"Although you hypothesise they probably wouldn\\u2019t affect the final result, this is not proven and in addition just removing them doesn\\u2019t show a good scientific practice in my opinion.\\\"* \\n\\nWhat exactly do you believe needs to be proven? Our focus is on addressing gaps in learning-based methods and comparing them with other learning-based approaches. Including or excluding iterative methods does not change the core contributions of this paper, which are rigorously demonstrated. Simplifying the manuscript by removing confusing elements improves clarity and accessibility for readers, not the opposite. \\n\\nFinally, we want to emphasize that we are not here to argue about the merits of iterative versus learning-based methods. In fact, much of this work draws inspiration from efforts to bridge the gap between the two approaches such as [1][2][3]. Your continued focus on defending iterative methods rather than engaging with the actual contributions of the paper seems counterproductive to the review process. \\n\\nWe encourage you to reflect on these points and reconsider your evaluation to ensure it is **transparent, fair, and objective**. The discussion period is ending soon, and we hope you can provide actionable feedback that reflects the contributions and impact of our work in a professional and constructive manner. \\n\\nSincerely, \\nThe Authors \\n\\n[1] Siebert, H., Gro\\u00dfbr\\u00f6hmer, C., Hansen, L. and Heinrich, M.P., 2024. ConvexAdam: Self-Configuring Dual-Optimisation-Based 3D Multitask Medical Image Registration. IEEE Transactions on Medical Imaging.\\n\\n[2] Heinrich, M.P., 2019. Closing the gap between deep and conventional image registration using probabilistic dense displacement networks. 
In Medical Image Computing and Computer Assisted Intervention\\u2013MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13\\u201317, 2019, Proceedings, Part VI 22 (pp. 50-58). Springer International Publishing.\\n\\n[3] Heinrich, M.P., Papie\\u017c, B.W., Schnabel, J.A. and Handels, H., 2014. Non-parametric discrete registration with convex optimisation. In Biomedical Image Registration: 6th International Workshop, WBIR 2014, London, UK, July 7-8, 2014. Proceedings 6 (pp. 51-61). Springer International Publishing.\\n\\n[4] Jena, R., Sethi, D., Chaudhari, P. and Gee, J.C., 2024. Deep Learning in Medical Image Registration: Magic or Mirage?. arXiv preprint arXiv:2408.05839.\"}", "{\"summary\": \"The paper presents AdaWarp, a novel architecture in medical image registration. The model introduces a piece-wise smooth (P-S) assumption, which exploits the smoothness of intensity variations within anatomical regions while preserving sharp boundaries between organs. This assumption is incorporated into the network through a differentiable bilateral grid, which allows for efficient edge-preserving filtering and reduces computational complexity.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The integration of the differentiable bilateral grid into the deep learning framework for image registration is highly innovative. It effectively addresses the limitations of traditional smoothness constraints, enabling the model to better handle complex and localized deformations.\\n\\n2. The paper is well-structured, offering a clear explanation of the proposed methods. It provides detailed descriptions of the differentiable bilateral grid, encoder architecture, and adaptive filtering process. Visual aids, such as Figures 4 and 5, are particularly useful in clarifying complex comparisons.\\n\\n3. 
This method presents a promising alternative for resolving the conflict between global smoothness and local deformations, potentially offering improved solutions in certain applications.\", \"weaknesses\": \"The weaknesses of the paper are primarily in the literature review and experimental sections, which lack sufficient references and baseline comparisons, as well as visual results. These limitations are why I rated the paper as \\\"fair\\\" in terms of Presentation and Soundness.\\n\\n\\n1. The paper needs more references in the literature review. The current review only discusses works that do not address the conflict between global smoothness and local deformations. However, this is not the first paper to tackle this problem. Research such as multi-scale registration and patch-wise registration also offers relevant solutions. While these methods may not explicitly incorporate the piece-wise smooth prior, they still manage local deformations while maintaining overall smoothness. The authors should include these references in the background and select baselines from this body of work to show that the proposed method offers a superior solution to the problem.\\n\\n2. The experiments do not adequately support the claimed advantages of the proposed method. While the paper argues that the model can generate sharp boundaries between organs by incorporating the P-S assumption, it fails to provide visual results to substantiate this key contribution. Relying solely on numerical metrics like Dice, HD95, and SDlogJ does not clearly demonstrate that the model\\u2019s output preserves sharp boundaries.\\n\\n3. The writing in the experiments section is somewhat disorganized. The authors employ significantly different model structures and training strategies, including both unsupervised and semi-supervised approaches (which require further clarification), depending on the dataset. 
This inconsistency raises concerns about the generalizability of the model across different tasks. Additionally, the experiments lack ablation studies, which are necessary to demonstrate the effectiveness of each component in the proposed methods.\", \"questions\": \"1. Why were different model structures used for different datasets? What would be the result of using Ada-Res on the Abdomen CT dataset and Ada-Cost on the ACDC dataset? A comparison of these model structures across datasets could help demonstrate their generalizability and clarify why different architectures were chosen for each.\\n\\n2. In the Abdomen CT dataset, Ada-Cost uses \\u201cthe same segmentation model for feature extraction.\\u201d Was this segmentation model pre-trained? If so, this would make Ada-Cost a semi-supervised registration model. Comparing it with other unsupervised deep learning-based methods would be unfair. Additionally, how exactly was the segmentation model integrated into your model\\u2019s structure? Does it replace the \\\"guidance map generator,\\\" or is it incorporated elsewhere in the architecture?\\n\\n3. More references, more baselines and visual evaluations of warped images and warped segmentation masks would be highly valuable. Providing such visual results would help demonstrate the effectiveness of your method in producing sharp boundaries, which cannot be fully illustrated through numerical metrics alone.\\n\\n4. I would greatly appreciate it if the paper could provide information on the inference and training time of the proposed method. This data would offer more valuable insights into the computational efficiency of the model.\\n\\n5. Another concern is that the authors selected \\\"interpretability and explainable AI\\\" as Primary Area. 
I\\u2019m not sure if this is appropriate since there is no work about interpretability of proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Final Review Comments\", \"comment\": \"Thanks for the authors' effort in addressing my concerns. Overall, my concerns have been largely addressed, although I am still not convinced that brain MRI registration is an easier task than cardiac and abdomen registration. Anyway, the evaluation in the cardiac and abdomen datasets is fine for this study. Nevertheless, after reading the comments from other reviewers, I think it's hard to increase my original score, so I decided to maintain it.\"}", "{\"title\": \"Final words and thoughts\", \"comment\": \"Dear authors,\\n\\nFirst of all I would like to thank you for engaging with the rebuttal and for improving your manuscript taking into consideration all the reviewers comments and for providing quantitative results. As I said in my comments above I believe that this work will be interesting for the community.\\nHowever, due to the amount of changes in the manuscript and the way that reviewers do/do not engage in the rebuttal I believe that the paper needs another round of reviews to ensure that the quality after the additions is met. Moreover, I still find the that the written quality of the manuscript should be improved, especially the results and discipline which I still find not optimally organized and discussed. \\nMore specifically, I am not persuaded about the choice of different baselines yet. In my opinion you could have chosen some common baselines that work for both and then support with strong arguments why you choose the different ones. Similarly, I am not sure I agree that the iterative methods are more difficult to tune and I certainly do not agree with the choice of removing them because they do not work. 
Although you hypothesise they probably wouldn\u2019t affect the final result, this is not proven, and in addition just removing them doesn\u2019t show good scientific practice in my opinion.\\n\\nGiven all the above I am not raising my score. I believe this work has the potential to be a great paper for the registration community, but I believe it needs a bit more in-depth work to improve and polish its experimentation.\"}", "{\"title\": \"Final remarks\", \"comment\": \"As I said before, I want to thank the authors for engaging in the rebuttal. I believe that this is a novel work that brings merit to the field of registration. However, I believe that the paper is not very well organised, and as a result, this makes it unclear and difficult to follow. Many of the responses of the authors can be incorporated into the methods and results sections. Due to the amount of changes and additions, I believe that the paper would require another round of reviews to assess it.\\n\\nLast but not least, a very important point for me is, as I said in [K], the qualitative results (warped images, difference images, deformation field visualisations), as these allow all of us who work on registration to assess the quality along with the surrogate quantitative measures.\\n\\nGiven all the above, without discouraging the authors (I still believe that the method is novel), I am not going to change my score, with the hope that this paper will improve through this process and will be resubmitted later somewhere else.\"}", "{\"title\": \"Part 4: Addressing [G], [J], [K], and Further Clarification of [A]\", \"comment\": \"[G] Reply:\\nWe have included t-tests for both Dice and HD95 metrics in the revised manuscript.\\n\\n[J] Reply:\\nAs noted in the first point of the **Author Response Summary**, we will release the code immediately upon the paper's acceptance.\\n\\n[K] Reply: \\nFor qualitative results, please refer to Figure 4 and Figure 10 in the revised manuscript. 
A brief summary of key observations is provided in the third point of the **Author Response Summary**.\\n\\n[A] Further Clarification:\\nThank you for your feedback on our response to your comment [A]. We have updated the manuscript to reflect the following discussion:\\n\\n- **Segmentation Accuracy:** \\n While there is ongoing debate on whether learning-based registration methods outperform traditional iterative methods, deep learning has undeniably dominated segmentation. Thus, existing iterative methods still rely on segmentation maps generated by deep learning, which provide superior contextual information compared to traditional segmentation algorithms.\\n\\n- **Amortized Optimization:** \\n Traditional iterative methods can incorporate contextual information (e.g., segmentation maps) but require instance-wise optimization, meaning new segmentation masks and energy function optimization are needed for every unseen image pair. In contrast, learning-based methods utilize amortized optimization, requiring only a single network trained on a cohort of image pairs, making them more efficient and scalable.\"}", "{\"comment\": \"Dear Reviewer UMDT,\\n\\nThank you for your thoughtful critiques, which have greatly helped us improve the manuscript. As the deadline for the discussion phase approaches, we encourage you to provide any final feedback to address remaining concerns. After the deadline, we will no longer be able to respond, so your timely engagement is highly valued. \\n\\nTo assist you in conducting **a subjective, transparent, and professional evaluation** of our work, we provide the following clarifications and reflections. Noting your **earlier unfamiliarity with the ICLR review procedure and timeline**, we hope this facilitates consensus and ensures the **real contributions and impact of our work are accurately reflected**. \\n\\n1. **Addressed Concerns**: \\n - We believe your prior comments **[A][B][C][D][E][G][H][I][J][K] have been addressed**. 
Some of these were explicitly acknowledged as resolved in your earlier responses, while others we believe have been addressed through the changes incorporated into the revised manuscript and summarized in the **Author Response Summary**. \\n - If you feel any of these points remain unresolved, please let us know, and we will be happy to provide additional clarification. \\n2. **Discussion on Baselines ([F])**: \\n - While we acknowledge we may hold differing opinions on the choice of baselines for the two datasets, we believe this does not detract from the original contributions of the paper. We clarify the following points to ensure alignment: \\n - **Iterative Methods**: Traditional iterative methods have been removed from the revised manuscript to avoid confusion. Their inclusion or exclusion does not affect the claims of the paper. Additionally, we note that such methods are significantly harder to tune compared to learning-based approaches, which we view as a key limitation of iterative methods. \\n - **Dataset-Specific Methods**: Some learning-based methods used in ACDC and others in the abdomen dataset were chosen based on their design specificity and relevance to the respective tasks. Given the long training times for image registration frameworks, only methods designed for general use were evaluated across datasets. While including all methods for all datasets may aid in understanding, excluding certain task-specific methods does not impact the main claims or the comparison with state-of-the-art methods. \\n\\nBased on the above, we hope these clarifications address any remaining concerns. We kindly encourage you to reconsider and **ensure your final score reflects the contributions and broader impact of our work**. Your feedback has been invaluable, and we deeply appreciate your time and engagement. \\n\\nSincerely, \\nThe Authors\"}", "{\"metareview\": \"This paper presents a method for deformable medical image registration. 
The main contribution of the authors is to introduce the P-S assumption to enforce global smoothness while respecting local discontinuities. While this assumption is newly introduced to deep learning based registration, it is not a new concept and has been used in traditional registration as well as other deep learning tasks. Therefore, I do agree with reviewers that the novelty of this work is limited. The reviewers also have concerns about the presentation of the paper. As pointed out by reviewer UMDT, \"Many of the responses of the authors can be incorporated into the methods and results sections. Due to the amount of changes and additions, I believe that the paper would require another round of reviews to assess it.\"\nI also realized that this paper uses cardiac and abdomen datasets in its experiments. However, a large amount of deformable registration work deals with brain datasets (which are publicly available). I would suggest that the authors include these datasets to improve the generality of the work. \nI have also considered the concerns from the authors regarding some of the reviewer comments. I agree that some reviewers may have high standards/requirements for papers. However, the rebuttal is mainly used to clarify potential misunderstandings of the paper, rather than to get reviewers to help revise or even rewrite the paper. Therefore, I believe it is the responsibility of the authors to make the paper clear and convincing to the audience. \n\nGiven these weaknesses and the fact that the overall scores of the paper are not high, I cannot recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised some issues with the paper for which they feel the required changes are too much for a simple revision. Therefore, they believed that a resubmission to the next venue with another round of review is more appropriate. 
The reviewers also felt that the contribution is not very strong, which is a subjective evaluation though.\"}", "{\"title\": \"Reply to Reply to author's comment\", \"comment\": \"Dear Reviewer RMNZ,\\n\\nThank you for your feedback. We would like to provide a gentle reminder of the ICLR submission-review timeline, as outlined in the email from the ICLR program chairs.\\n\\n**Timeline:** \\nWe are still in the discussion phase and have not yet responded to all your comments or uploaded our revised manuscript. The timeline is as follows: \\n- **November 27th**: Last day to upload a revised PDF. After this date, only forum replies are allowed (no manuscript changes). \\n- **December 2nd**: Last day for reviewers to post messages to authors (six-day extension). \\n- **December 3rd**: Last day for authors to post messages on the forum (six-day extension). \\n\\n> *\\\"As mentioned in Weakness 2 and Question 3, there remains insufficient evidence to demonstrate that the proposed method can achieve registration results with sharp boundaries (no visual results).\\\"*\\n\\nWe appreciate your patience as we work to address every concern, including your specific feedback on qualitative results. We are actively working on this and will provide updates as soon as possible. Thank you for your understanding.\"}", "{\"comment\": \"Dear Reviewer MgEu,\\n\\nThank you for acknowledging that your concerns have been largely addressed. We sincerely appreciate your engagement and thoughtful feedback throughout the discussion. \\n\\nRegarding your statement, *\u201cNevertheless, after reading the comments from other reviewers, I think it's hard to increase my original score, so I decided to maintain it,\u201d* we kindly ask for **clarification on how other reviewers' comments led to this conclusion**. 
We believe the score should primarily reflect your independent evaluation of the work, considering how well the concerns have been addressed and the broader contributions of the study. \\n\\nIn particular, we encourage you to revisit the highlights of the broader impact and contributions of our work, as outlined in our latest comments. These include: \\n- Advancing the integration of physical priors (e.g., piece-wise smoothness) into neural networks, bridging classical image processing with modern architectures. \\n- Demonstrating strong performance on challenging datasets using a simple yet effective framework, providing a robust and accessible baseline for future work. \\n- Preliminary results that extend the method to tasks like keypoint-based lung motion estimation and segmentation, illustrating its versatility beyond registration. \\n\\nWhile the **discussion period is ending soon**, we value further dialogue to ensure that the contributions of this work are fairly assessed **in a professional, transparent, and independent manner**. We hope that this engagement reflects not only on the potential score but also on the broader impact this work may have on the image registration community and fundamental neural network development. \\n\\nThank you again for your time and thoughtful input, and we look forward to any further comments you may have. \\n\\nSincerely, \\nThe Authors\"}", "{\"title\": \"Part 2: Addressing [F] and [H]\", \"comment\": \"[F] Reply: Thank you for pointing this out. The choice of baselines differs due to the following considerations:\\n1. **Iterative methods (ANTs, Demons, Bspline):** These were included for the cardiac dataset but excluded for the abdomen dataset because our parameter tuning failed to yield reasonable performance on the latter. \\n2. **MemWarp and TextSCF:** MemWarp is included for the cardiac dataset as it was originally developed for this type, whereas TextSCF is included for the abdomen dataset for similar reasons. \\n3. 
**ConvexAdam and SAMConvex:** These methods are included for the abdomen dataset due to its characteristic large deformations. While they may also perform well on cardiac data, their primary design focuses on datasets with large deformations, making the baselines fair for the comparisons in this paper.\\n4. **DeBG:** As described in the original paper (lines 113\\u2013121 and 369\\u2013377), this method uses shuffled channels instead of real splatting to represent range dimensions, inaccurately preserving the image manifold in higher-dimensional space. As a result, this shuffling cannot handle the image pyramid-based approach used in the abdomen dataset, where each pyramid level shares the same feature map and cost volume across different scales. \\n\\n[H] Reply: Thank you for your question and the reference. We believe this question is crucial to the image registration community. The claim here is solely based on the results in Table 1, where learning-based methods generally outperform traditional iterative methods (ANTs, Demons, Bspline) in registration accuracy for the cardiac dataset used in this paper, though with slightly higher SDlogJ. \\nHowever, this trend may not hold universally. The relative advantages of learning-based methods over iterative methods in purely unsupervised settings (without label supervision) remain an active research topic. Iterative methods, particularly discrete optimization-based ones, often achieve comparable accuracy to unsupervised learning methods on datasets with low-complexity deformations and higher accuracy on datasets with large deformations. This discrepancy stems from two main factors:\\n1. **Dissimilarity function:** Unsupervised learning methods share the same dissimilarity function as iterative methods, and amortized optimization provides no additional advantage for label matching performance in low-complexity scenarios. [2] \\n2. 
**Regularization:** In unsupervised learning, the burden of smoothness regularization is enforced entirely on the network weights, whereas iterative methods compute gradients directly on the deformation field. This indirect optimization in unsupervised learning can lead to inferior performance compared to iterative methods, particularly on datasets with large deformations. Although some works [3] integrate smoothness regularization explicitly as part of the network, they still require segmentation masks for effective implementation.\\n\\nHowever, we have observed two cases where iterative methods may fall behind learning-based methods in image registration:\\n1. **Unsupervised settings with large-scale datasets:** In large-scale datasets with low-complexity deformations (e.g., the 4000-subject LUMIR dataset [4] from the Learn2Reg Challenge 2024 [5]), from our empirical experience, iterative methods, whether continuous, discrete, or instance optimization with neural networks, consistently underperform compared to pure learning-based methods, despite extensive parameter tuning. We attribute this gap to the regularization term. With sufficient data, while the dissimilarity metric offers no clear advantage for amortized optimization [2], the regularization term leverages the neural network\u2019s ability to incorporate contextual information [3], enabling superior flow propagation and smoothness regularization.\\n2. **Semi-supervised settings with label supervision:** When label supervision (e.g., segmentation or keypoints) is introduced, learning-based methods outperform iterative methods by effectively utilizing surrounding contextual information from labels, resulting in higher registration accuracy. However, additional losses like segmentation loss may cause less smooth deformations at object boundaries, leading to implausible fields and higher SDlogJ values.\\n\\n[1] Hansen, L. and Heinrich, M.P., 2021. 
Revisiting iterative highly efficient optimization schemes in medical image registration. MICCAI 2021.\\n\\n[2] Jena, R., Sethi, D., Chaudhari, P. and Gee, J.C., 2024. Deep Learning in Medical Image Registration: Magic or Mirage?. arXiv preprint 2024.\\n\\n[3] Heinrich, M.P., 2019. Closing the gap between deep and conventional image registration using probabilistic dense displacement networks. MICCAI 2019\\n\\n[4] Liu, Y., Chen, J., Wei, S., Carass, A. and Prince, J., 2024. On finite difference jacobian computation in deformable image registration. IJCV 2024.\\n\\n[5] https://learn2reg.grand-challenge.org\"}", "{\"summary\": \"This paper leverages prior knowledge observed in medical images to introduce the Piece-wise Smooth (P-S) Assumption as a basis for addressing medical image registration tasks. Specifically, the authors propose AdaWarp, a warping method that utilizes learnable adaptive filtering to register medical scans in line with the P-S assumption. By employing a low-resolution latent representation along with a differentiable bilateral grid, the method achieves a better balance between accuracy and efficiency. Experiments conducted on two registration datasets validate the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation behind this paper is reasonable. By analyzing daily CT and MRI scans in the cardiac and abdominal regions, the authors observed two consistent patterns across certain subjects, leading to the formulation of the Piece-wise Smooth (P-S) Assumption. This assumption leverages physical priors from observed medical image patterns, which is both innovative and plausible, enhancing neural network-based registration tasks by grounding them in realistic assumptions about medical image structures.\\n2. The paper provides thorough comparative experiments. 
The authors test AdaWarp on two registration datasets spanning different modalities and input constraints, which demonstrates robustness and broad applicability.\", \"weaknesses\": \"1. The novelty of this paper does not seem particularly strong. While the method leverages an encoder to extract a latent representation that approximates the deformation field at a low resolution, this approach mainly contributes to the model's efficiency but is not unique. The use of latent feature representations for similar tasks has already become common in the field.\\n2. The core of AdaWarp is a differentiable bilateral grid, which naturally incorporates the P-S prior. In implementation, the guidance map aids in processes like splatting, blurring, and slicing. This incremental modification lacks sufficient novelty.\", \"questions\": \"See the above strengths and weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer RMNZ,\\n\\nThank you for your prompt response. We now have a clearer understanding of the source of confusion and believe the following clarification will help resolve it. Before getting into specific responses, we would like to establish some key points:\\n\\n1. **Definition of Terms**: \\n\\nWe believe that the terms \\\"local discontinuities\\\" or \\\"sharp boundaries\\\" refer to features in the **displacement field** rather than in the warped images. If you have a different interpretation, please let us know.\\n\\n2. 
**Illustration of Displacement Field Discontinuities**: \\n\\nPlease refer to [this figure](https://ibb.co/7J8hQrf), which demonstrates the desired behaviors of a discontinuous displacement field:\\n - **(a) Local Homogeneity**: Smooth displacement vectors within an organ, representing the expected uniform motion.\\n - **(b) Varying Magnitudes in Similar Directions**: Displacement vectors with different magnitudes but similar directions, illustrating soft tissue moving against rigid structures.\\n - **(c) Sliding Boundary Conditions**: Displacement vectors on opposite sides of a boundary moving in opposite directions, depicting sliding motions between adjacent organs or between organs and the background.\\n\\n3. **Piece-wise Smoothness vs. Sharp Boundaries**: \\n\\nThe ability to preserve discontinuities does not necessarily imply the presence of sharp boundaries in the produced deformation field. Most deformations in the human body are elastic, meaning they are generally continuously differentiable and invertible. Even for adjacent organs with sharp boundaries in terms of image intensity, the displacement field may remain smooth and continuous. **In essence, clear and well-defined boundaries in image intensities do not inherently result in displacement discontinuities; only motions between these objects and the background can lead to displacement discontinuities.**\"}" ] }
0WqAnYWi7H
Mitigating Distribution Shifts: Uncertainty-Aware Offline-to-Online Reinforcement Learning
[ "Mohamad H. Danesh", "Maxime Wabartha", "Joelle Pineau", "Hsiu-Chin Lin" ]
Deploying reinforcement learning (RL) policies in real-world scenarios, particularly through offline learning approaches, faces challenges due to distribution shifts from training environments. Past approaches have shown limitations such as poor generalization to out-of-distribution (OOD) variations or requiring extensive retraining on target domains. We propose Uncertainty-aware Adaptive RL, UARL, a novel offline RL pipeline that enhances OOD detection and policy generalization without directly training in OOD environments. UARL frames distribution shifts as OOD problems and incorporates a new OOD detection method to quantify uncertainty. This approach enables iterative policy fine-tuning, starting with offline training on a limited state space and progressively expanding to more diverse variations of the training environment through online interactions. We demonstrate the effectiveness and robustness of UARL through extensive experiments on continuous control tasks, showing reliability in OOD detection compared to existing methods as well as improved performance and sample efficiency.
[ "Reinforcement learning", "Out-of-distribution detection", "Uncertainty estimation", "Offline RL" ]
Reject
https://openreview.net/pdf?id=0WqAnYWi7H
https://openreview.net/forum?id=0WqAnYWi7H
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xuQTtUHTTa", "xTqrwARmpm", "xPY46KVIOL", "uiEMk9q1JB", "se2ZFSnVFe", "qFa5HjFuQT", "q1hAFxb3PK", "kVHDcw8kla", "jmtibPBokc", "jFEYha0oHA", "hlxHUA8Xjo", "fVldHvIpmO", "SpAEPTbZSC", "M0COUA0GKD", "KyX0gUFjrY", "KxR9Wa5d0B", "JiBTXzp1H8", "HnVHlqn6EE", "HeP0XgLDwg", "GPIh9nDXfI", "FsMQsp5KWs", "Cp6PBNyk8J", "Cab7BmY9tg", "7p5g6tFtGf", "76710xMzVZ", "0YeQ0pNFta" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732778805247, 1732399510769, 1732398970406, 1732398706368, 1737523653036, 1732764897982, 1732763979274, 1732763288888, 1730207469906, 1732400139579, 1732576672708, 1732399732481, 1732399281837, 1732614351670, 1730171994308, 1732764145221, 1732763837462, 1730636551159, 1732399242454, 1730382356354, 1732764849436, 1734880569414, 1730671670920, 1732400166285, 1732399493534, 1732399896600 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_1mvx" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_FRBA" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_ruzS" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_FRBA" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_ruzS" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_e1VH" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_1mvx" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Area_Chair_J8qH" ], [ "ICLR.cc/2025/Conference/Submission4644/Reviewer_aVXx" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ], [ "ICLR.cc/2025/Conference/Submission4644/Authors" ] ], "structured_content_str": [ "{\"title\": \"Feedback to the authors\", \"comment\": \"Thank you for your feedback. After revisiting the reviews, I believe my score aligns with the feedback and will keep it. I appreciate your effort and encourage you to continue developing your ideas.\"}", "{\"title\": \"Authors' Response - Part 2\", \"comment\": \"> *Does this algorithm fall within the scope of Offline Reinforcement Learning?*\\n\\nWe believe this topic is already thoroughly addressed in the Related Work section (Lines 110-114), where we explicitly discuss the distinctions between our approach and existing Offline RL methods. However, we recognize the importance of ensuring that this contextualization is evident throughout the paper. To address this, we further emphasized these points in the abstract.\"}", "{\"title\": \"Authors' Response\", \"comment\": \"We sincerely thank you for your assessment of our work. Your comments have helped us further strengthen the clarity and presentation of our contributions. Following are our responses to your comments.\\n\\n\\n> *Repulsive locations discussion could be written more formally and thus more concisely. 
The current version is a bit too dense verbally.*\\n\\nThe section aims to establish a clear conceptual framework for how our approach progressively expands the exploration space through targeted environmental randomization. While we appreciate the suggestion for potential rephrasing, we believe the current format is necessary as we introduce the concept of repulsive locations in the RL context, detail our specific implementation through hyperparameter randomization, illustrate the progressive expansion of the exploration space through Fig. 2, and distinguish our targeted approach from standard domain randomization. However, we welcome specific suggestions for improving the formality or conciseness of particular passages while maintaining these core explanatory elements. If the reviewer has specific areas where they feel the language could be tightened, we would be happy to address them in revision.\\n\\n> *I'm also a bit confused by Figure 2, while it's a nice visual, is it something derived from the experiments or is it just a conceptual illustration?*\\n\\nIt is a conceptual visualization for illustrating the idea. It was not derived from actual experiments but is meant to provide an overview of the approach. We mentioned that explicitly in the revision.\\n\\n> *Lack of related works*\\n\\nUARL differs fundamentally from curriculum RL, which relies on a structured progression of tasks, helping the agent learn incrementally by tackling simpler tasks before harder ones. In curriculum RL, the policy is refined on new tasks as they are introduced. However, this sequential approach may fall short in real-world scenarios where unexpected, OOD events can disrupt performance. In contrast, UARL dynamically detects and adapts to OOD shifts based on real-time uncertainty rather than a fixed curriculum. Notably, UARL assumes that policies cannot be refined in the target environment. 
By quantifying and addressing uncertainty, our approach improves safety and robustness in unpredictable environments, making it better suited for real-world deployment where adaptability is key. To highlight these differences further, we will provide a discussion in the Related Work section.\\n\\n> *How do you define \\\"progressively expanding the randomization range\\\" for different environment parameters?*\\n\\nIn UARL, \\\"progressively expanding the randomization range\\\" does not involve a simple, uniform increase in each parameter. Instead, we dynamically adjust the randomization based on environmental uncertainty, without requiring expert knowledge for each parameter. Specifically, the expanded dataset serves as a repulsive dataset, introducing uncertainty to help detect OOD cases. During the verification process on the real-world environment ($\\\\mathcal{D}_w$), two outcomes guide us: if low uncertainty is detected, it suggests that the dataset adequately captures the real-world dynamics, meaning the parameter range is reasonable. If high variance remains, it indicates that the dataset is still \\\"too narrow,\\\" signaling that the randomization range should be further expanded to better encompass real-world variability. This iterative process ensures that we avoid training in environments that are too far from the actual dynamics, stopping before the agent encounters destabilizing conditions.\"}", "{\"title\": \"Message to all reviewers\", \"comment\": \"We want to thank all five reviewers for taking the time to provide thoughtful and detailed feedback on our paper. Your comments have been incredibly helpful in improving the quality of our work, and we truly appreciate your effort and expertise.\\nThe revision will be uploaded in 24 hours. In the revised version of the paper, we have worked to address many of the concerns and suggestions you raised, with changes written in red. 
We hope you will find these updates helpful as you review the changes.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Authors' Comment\", \"comment\": \"As the discussion period draws to a close, we wanted to follow up to check if you have had a chance to review our rebuttal. If you have any remaining questions or concerns, we would be glad to address them. Thank you.\"}", "{\"title\": \"Authors' Response - After Rebuttal - Part 2\", \"comment\": \"> *Related work and novelty*\\n\\nDomain randomization lacks clear guidelines on what parameters to randomize and to what extent. For instance, the true value of friction is not directly measurable as it depends on complex dynamic interactions between the robot's materials and the surface it operates on. Similarly, the mass of each robot link, while indirectly measurable, significantly influences overall dynamics. In practice, researchers make assumptions about these parameters, train policies in simulation, and validate them on real systems. However, these assumptions are often flawed\\u2013either the randomization range is too narrow or too broad. A limited range can result in unstable policies when deployed on the robot, while an overly broad range may train the policy on scenarios with no real-world relevance, wasting computation and time. This iterative trial-and-error process can take days or even weeks to identify an effective randomization strategy.\\n\\nFor this reason, our goal is to develop criteria to skip the online validation step, so we can make sure we do not execute a potentially problematic policy on a real system.\\n\\nRegarding the novelty of the diversity term, we acknowledge that it draws inspiration from existing methods in the supervised learning literature. However, our contribution lies in adapting this concept to the RL setting, where enforcing diversity in the output space, rather than the parameter space, presents unique challenges and implications. 
By validating that the insights from the original DENN paper are both applicable and effective in the UARL context, we demonstrate the value and utility of this adaptation. This extension is non-trivial, as it addresses the specific dynamics of RL environments and shows that these insights can significantly enhance OOD detection and policy robustness in our framework. While we acknowledge and appreciate the reviewer's perspective, we respectfully disagree for the reasons outlined above.\\n\\n> *Lacking baselines*\\n\\nAs highlighted in our earlier responses and addressing the reviewer\\u2019s specific concern about off-dynamics RL, our method is fundamentally distinct from both off-dynamics approaches and RLPD. Specifically focusing on RLPD, this method is primarily centered around **online learning** while leveraging offline data to enhance the process. In contrast, the core principle of UARL lies in its complete avoidance of any learning that utilizes data from the target domain. This distinction is crucial and underscores why a direct comparison is unfair. \\n\\nTo clarify further, please refer to Algorithm 1, lines 12\\u201314 of the RLPD paper, where half of the replay buffer is explicitly filled with samples from online interactions with the target domain. By comparison, our approach assumes access only to a limited set of datapoints from the target domain, without any direct online interactions. In this regard, RLPD aligns more closely with off-dynamics RL methods such as VGDF, PAR, and H2O, as discussed earlier. By eliminating the need for learning with target domain data, UARL avoids these challenges when deploying an RL policy to the real-world.\\n\\n> *On the performance of UARL*\\n\\nThank you for your comment. As explained in our earlier comments, the main focus of our work is improving OOD detection for RL agents, not enhancing their performance. 
We stated this point clearly in the revision (line 75), our key contribution is \\u201ca method for quantifying uncertainty and adapting policy without direct interactions in the OOD environments.\\u201d Performance improvement is not the primary objective of this work. Instead, we present performance results to demonstrate that our method does not compromise the underlying policy\\u2019s effectiveness and, in fact, achieves competitive performance with the addition of diversity term. Our primary goal remains OOD detection, as demonstrated in Figure 4, which highlights the clear separation between ID and OOD scenarios achieved by UARL.\"}", "{\"title\": \"Authors' Response - After Rebuttal\", \"comment\": \"We sincerely thank the reviewer for their fair assessment of our work, and acknowledging our efforts in addressing their concerns and improving the paper.\\n\\n> *I would strongly encourage you to include more detailed ablation studies that rigorously examine the impact of different hyperparameter settings on the performance. Such an analysis would not only enhance the robustness of your findings but also contribute to the overall credibility of your work.*\\n\\nThank you for your suggestion. In response, we have added a detailed hyperparameter sensitivity analysis in Appendix B.3, examining the effects of the diversity coefficient and diversity scale across a broad range of values. To enhance clarity, we separated the original plot in the paper by baseline algorithms (AWAC, CQL, and TD3-BC) and hyperparameter $\\\\delta$. This analysis, based on 450 experimental runs, provides valuable insights into the balance between the RL objective and the diversity loss in Equation (5).\\n\\nSpecifically, the choice of the diversity coefficient $\\\\lambda$ is not arbitrary, as it governs the trade-off between these objectives. When the diversity loss dominates, the agent may prioritize generating diverse behaviors at the expense of achieving the task. 
Conversely, with smaller $\\\\lambda$ values, the diversity loss becomes negligible, leaving the RL objective largely unaffected. Our findings reveal that moderate $\\\\lambda$ values generally yield the best performance, while extreme values degrade it by skewing this balance.\\n\\nThese results, visualized with two new figures in Appendix B.3, confirm the robustness of our default hyperparameter configuration and underscore the potential for further optimization through careful fine-tuning in specific scenarios. This enhanced analysis not only validates our findings but also reinforces the credibility and generalizability of our approach.\\n\\n> *With respect to the adaptive range adjustment method you referenced, I kindly request a more comprehensive explanation of its implementation. A more explicit and detailed description would significantly improve the understanding of its mechanics and effectiveness, providing greater clarity for readers.*\\n\\nTypically, determining the extent of domain randomization is challenging because the true characteristics of the real world are unknown. A common approach is to make educated guesses about these values, train a policy in simulation, and then test it in the real world. If the policy underperforms, one would return to the simulation and increase the degree of randomization.\\n\\nIn our case, we aim to eliminate the need for real-world validation. An alternative would be to arbitrarily select a randomization range, train within that range, and evaluate the policy using our method. However, this approach of \\\"blind\\\" randomization often results in extensive evaluations until the policy aligns with the target distribution. 
Instead, we adopted adaptive range adjustment because (1) it provides a structured way to determine the randomization range, and (2) it progressively increases the environment's complexity, allowing the policy to be fine-tuned in subsequent randomized environments without restarting training.\"}", "{\"summary\": \"This paper deals with the distribution shift issue in reinforcement learning (RL). The authors introduce an approach called Uncertainty-aware Adaptive RL (UARL) that enhances policy generalization across diverse variations of a given environment. UARL views distribution shifts as OOD problems and integrates an OOD detection method to quantify uncertainty, i.e., Q-value variance. UARL realizes diversity in critics via the DENN method. The authors claim that UARL enables iterative policy fine-tuning, starting with offline training on a limited state space and progressively expanding to more diverse variations of the same environment through online interactions. The authors demonstrate the effectiveness of UARL through some experiments on continuous control tasks, showing improved performance and sample efficiency compared to existing methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"## Pros\", \"This paper enjoys the following advantages,\", \"This paper is well-written. The presentation of this paper is very good and of high quality. The figures are very nice and helpful. 
Some of the illustration figures significantly convey the idea and core design of UARL, e.g., Figure 1, Figure 2\", \"This paper is easy to read and easy to follow\", \"The authors provide open-source codes in the anonymous website, and I believe that the results reported in this paper are reproducible\"], \"weaknesses\": [\"## Cons\", \"Despite the aforementioned advantages, this paper has the following flaws.\", \"**Offline-to-online RL or off-dynamics RL?** This paper claims that they focus on the offline-to-online setting, while it seems that they actually are dealing with off-dynamics RL [1]. The offline-to-online RL typically refers to training a policy offline from a static dataset and then fine-tuning it with some extra online interactions in the same environment. Instead, the authors conduct experiments by modifying the environmental parameters, which essentially constructs dynamics shifts. The experimental setting resembles that in off-dynamics RL [2,3,4]. It seems unclear to me whether it is suitable to name the paper *offline-to-online* or *off-dynamics*.\", \"**Insufficient related work and Limited novelty.** The authors emphasize that the proposed method can enhance the safety and robustness of RL, but it includes too few related works on safe offline/offline-to-online RL and robust offline/offline-to-online RL. Meanwhile, I have doubts about the novelty of this work. The authors progressively increase hyperparameter randomization of the environment (e.g., friction) when the variance between the Q-ensemble is large and terminates until the policy is safe enough to be deployed. Such an idea resembles [5], which progressively adapts its learned policy by modifying the parameters of the environment. Furthermore, I am a bit confused about the benefits of parameter randomization against domain randomization. If the user can adjust the parameters of the environment, then why not directly use domain randomization? 
I would expect reasonable justifications for the design of the tasks here. Furthermore, the diversity term is not novel, it is borrowed directly from the existing literature. These all together make the contribution of this paper somewhat limited.\", \"**Lacking baseline algorithms.** As commented above, this paper claims that it addresses the offline-to-online RL but actually focuses on the off-dynamics RL setting, the authors should include the following baselines,\", \"baselines on off-dynamics RL, e.g., [2,3,4]. This is vital to show the effectiveness of the UARL in terms of policy generalization to the target domain\", \"RLPD [6], which is a method specially designed for learning with offline data and online interactions with the environment. This baseline is important since it exhibits superior performance given the experimental setting described in this paper (offline data and online interactions). Based on my experience, RLPD can achieve quite strong performance even when there exists dynamics shifts between the offline data and the online environment. Involving this baseline can justify the necessity of the components adopted in UARL (otherwise, one can directly use RLPD for deployment)\", \"baselines on safe RL and robust RL. The authors claim that UARL can enhance the safety and robustness of the executed actions, while they do not include any safe RL or robust RL methods for comparison, making it hard to see the rationality and effectiveness of UARL\", \"baselines on offline-to-online RL. Unfortunately, this paper also does not include offline-to-online RL methods as valid baseline methods. It is hard to tell the true effectiveness of UARL without these methods, e.g., [7,8,9]\", \"(minor) **Lacking theoretical justifications.** There is no theoretical analysis of the UARL. I do not want to blame the authors too much on this point. 
I understand that this paper may set the focus mainly on the empirical side, but including some theoretical analysis can strengthen this paper.\", \"(minor) **Other issues.**\", \"in Equation 5, you wrote $R(s,a)$ in the bellman error, while $r$ in the diversity term $\\\\mathcal{L}_{div}^{RL}$. I think they should be identical, right?\", \"the authors do not discuss the limitations of their method in the main text or the appendix. It is important to acknowledge both the advantages and the limitations of the proposed method.\", \"the performance improvement of UARL seems limited and incremental on some tasks (e.g., see Figure 3)\", \"UARL can still suffer from performance degradation during the fine-tuning phase (e.g., see Figure 5)\", \"Given the above concerns, I vote for rejection since I believe that this paper needs a significant revision before being accepted for possible publication.\", \"[1] Off-dynamics reinforcement learning: Training for transfer with domain classifiers. ICLR\", \"[2] When to trust your simulator: Dynamics-aware hybrid offline-and-online reinforcement learning. NeurIPS\", \"[3] Cross-domain policy adaptation via value-guided data filtering. NeurIPS\", \"[4] Cross-domain policy adaptation by capturing representation mismatch. ICML\", \"[5] Revolver: Continuous evolutionary models for robot-to-robot policy transfer. ICML\", \"[6] Efficient online reinforcement learning with offline data. ICML\", \"[7] Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble. CoRL\", \"[8] Bayesian Design Principles for Offline-to-Online Reinforcement Learning. ICML\", \"[9] Proto: Iterative policy regularized offline-to-online reinforcement learning. Arxiv\"], \"questions\": [\"Please c.f. the comments above. 
Besides, I have the following questions,\", \"In Lines 184-185, you wrote, *disagreement among ensemble models, particularly at the boundary of the training distribution*, what do you exactly mean by *at the boundary of the training distribution*?\", \"what are the advantages of the diversity term in Equation 5 compared to other diversity terms? (e.g., the diversity term used in the EDAC paper) The authors ought to justify the advantages of doing so rather than using some other methods.\", \"how can the authors tell that the uncertainty measurement provided in this paper is valid? It would be better to compare against some other uncertainty estimation methods and visualize the uncertainty measurement for a better comparison\", \"do you have any parameter study on the threshold parameter in Algorithm 2? How it can affect the performance of the agent? Do we need to tune this hyperparameter per task? How can we ensure that the policy is safe when $V_Q \\\\le {\\\\rm threshold}$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Response - Part 1\", \"comment\": \"Thank you for your thorough and thoughtful review of our submission. We greatly appreciate the time and effort you put into providing detailed feedback and suggestions to enhance our work.\\n\\n> *It should be further shown, either theoretically or experimentally, why the diversity term $\\\\mathcal{L}^{\\\\text{RL}}_{\\\\text{div}} $ in Eq. 5 allows the ensemble Q network to learn the value of diversity.*\\n\\nIndeed, in the worst case, it would be possible for all members of the ensemble to converge to the same value Q\\u2019(s,a) which would differ from r + Q(s,a), and the loss would be satisfied. However, because Q is high dimensional and each member of the ensemble was initialized independently, this case is extremely unlikely and does not happen in practice. 
Note that this was already true in DENN. Therefore, we do in practice obtain a high diversity among the members of the ensemble. You can verify this directly in Fig. 4: the critic variances for UARL are all >> 1 OOD and are much larger than the critic variances ID, indicating that in practice all the members of the ensemble differ. Note that if they did not differ, the critic variance would be closer to 0, and if UARL was not useful, the critic variance OOD would be much closer to the critic variance ID, as is the case for AWAC (first row of Fig 4). Finally, this result remains consistent as we change the definition of ID and OOD (columns 2 and 3).\\n\\n\\n> *Further clarification is needed for how to calculate $V_Q$ in Algorithm 2 and how to calculate critical variance in uncertainty estimation experiments*\\n\\n$V_Q$ is calculated simply as the variance of the outputs from the ensemble of critics. For any given state-action pair $(s, a)$, each critic in the ensemble $(Q_1, ..., Q_N)$ will output an estimate of the Q-value, represented as $Q_1(s, a), ..., Q_N(s, a)$. The variance ($V_Q$) is then computed from these Q-value estimates, reflecting the disagreement or uncertainty among the critics for that specific state-action pair. To calculate the variance, we just follow the variance calculation equation:\\n$V_Q = \\\\frac{1}{N-1} \\\\sum_{i=1}^{N} (Q_i(s, a) - \\\\bar{Q}(s, a))^2$\\nwhere $\\\\bar{Q}(s, a)$ is the average of the Q-values over the ensemble, $\\\\bar{Q}(s, a) = \\\\frac{1}{N} \\\\sum_{i=1}^{N} Q_i(s, a)$. Given this basic variance formula, the calculation is straightforward and well-established in statistics.\\n\\n> *The ablation experiments in Appendix B.3 are not detailed enough*\\n\\nWe understand the concern about the readability of the curves and appreciate the suggestion to differentiate the parameter combinations more clearly. 
Our primary goal with the plots was to convey the effect of different hyperparameter settings on the algorithm's performance without performing extensive hyperparameter tuning. To avoid clutter, we chose to represent all curves with the same color and made the line corresponding to the chosen hyperparameters slightly darker to make it distinguishable. Given the large number of curves (15 in total) with standard deviation as shaded areas, introducing additional colors would likely make the plot overwhelming and harder to interpret.\\n\\n> *For each $E_i$, the randomized environmental hyperparameter range is determined without a common metric but as a hyperparameter, which may require a lot of time for online tuning for complex scenarios?*\\n\\nIn UARL, the randomization ranges for environmental hyperparameters are not manually tuned or determined via trial and error. Instead, we rely on a dynamic and adaptive approach where the range of each parameter is adjusted based on the agent\\u2019s uncertainty in the environment. This process allows the system to automatically adjust the randomization ranges without requiring extensive online tuning. In the paper, we ensure that the parameters are varied in a way that introduces meaningful uncertainty for OOD detection while avoiding excessive complexity or destabilization of the agent. Therefore, while the approach may seem complex, it does not require significant manual intervention or exhaustive online tuning.\\n\\n> *According to Eq. 5, $R(s, a)$ is not sampled from the dataset $\\\\mathcal{D}$, but the general Q-function update for offline RL, as in Eq. 1, is to use the sampled $r$. Is this a mistake here?*\\n\\nThanks for bringing this to our attention. It is indeed a mistake. We fixed it in the revision.\"}", "{\"title\": \"Response after rebuttal\", \"comment\": \"Thank you for your thorough and thoughtful response to my comments, as well as for the additional experiments and analysis you have provided. 
My concerns have been thoroughly addressed, and I have raised the score accordingly. However, I would like to offer a few additional points for your consideration:\\nWhile I recognize and appreciate your intent to keep the figures simple, I would strongly encourage you to include more detailed ablation studies that rigorously examine the impact of different hyperparameter settings on the performance. Such an analysis would not only enhance the robustness of your findings but also contribute to the overall credibility of your work.\\nWith respect to the adaptive range adjustment method you referenced, I kindly request a more comprehensive explanation of its implementation. A more explicit and detailed description would significantly improve the understanding of its mechanics and effectiveness, providing greater clarity for readers.\\nGiven the improvements made, I am confident that these additional refinements will further strengthen the quality of the work.\"}", "{\"title\": \"Authors' Response - Part 1\", \"comment\": \"We sincerely thank the reviewer for their insightful and detailed assessment of our work. We acknowledge that our initial submission did not effectively convey our ideas, and we would like to provide clarification here. Our primary goal is to develop reliable OOD detection capabilities without direct interaction with the target domain. Policy robustness is not the main focus of our work.\\n\\n> *Offline-to-online RL or off-dynamics RL?*\\n\\nWe thank the reviewer for highlighting this aspect of domain randomization. We acknowledge that it was not clearly explained in our initial submission. Indeed, off-dynamics is a better description of our work. \\n\\nThe prior work [1-5] assumes that the target domain is accessible and relies on refining the policy on the target domain. However, this presents significant safety risks when applied to complex, real-world systems. 
For example, refining a policy for a self-driving car involves deploying a potentially suboptimal policy to a real system, where the stakes of failure or \\\"termination\\\" are exceptionally high.\\n\\nIn contrast, our work does not rely on refining the policy by interacting with the target domain. Instead, we use an ensemble of critics as a proxy to evaluate whether the policy is \\u201cin-distribution\\u201d of the target domain.\\n\\n> *Furthermore, I am a bit confused about the benefits of parameter randomization against domain randomization*\\n\\nIn conventional domain randomization, there is often no clear guidance on which parameters to randomize or the extent of randomization needed. The process tends to be somewhat arbitrary and relies on iterative validation, comparing the performance of a randomized policy to real-world scenarios. However, this trial-and-error approach also presents significant safety risks when applied to complex, real-world systems, similar to our reasoning above.\\n\\nOur research seeks to improve domain randomization by introducing a clear evaluation criterion to assess whether a policy is ready for real-world deployment without directly deploying it. We approach this as an out-of-distribution detection problem. We continuously randomize the parameters until the policy is \\u201cin-distribution\\u201d of the real-world data. By utilizing an ensemble of critics, we evaluate whether the randomization effectively encompasses the true dynamics of the environment. Our notion of safety rests on avoiding real-time interactions with the environment to refine the policy.\\n\\nOur algorithm indeed builds upon the general form of the diversity term introduced in DENN. However, we note that ours adapts it to the RL setting by identifying the correct function to repulse from (in our case, $r + \\gamma Q(s\\u2019, a\\u2019)$). 
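To make this concrete, here is a minimal, illustrative sketch of such a repulsive diversity term and of the resulting critic-disagreement signal $V_Q$. This is rebuttal-only pseudocode in a PyTorch style: the ensemble layout, the `.detach()` on the target, and all function and variable names are our simplifying assumptions for exposition, not the exact implementation in the paper.

```python
import torch

def diversity_loss(q_ensemble, batch, delta=1.0, gamma=0.99):
    # Repulsive term on out-of-distribution ("repulsive") transitions:
    # each critic Q_i is pushed AWAY from the Bellman-style target
    # r + gamma * Q_i(s', a') by minimizing a Gaussian kernel that is
    # large when Q_i(s, a) and the target are close.
    s, a, r, s_next, a_next = batch  # illustrative batch layout
    per_critic = []
    for q in q_ensemble:
        target = (r + gamma * q(s_next, a_next)).detach()
        diff = q(s, a) - target
        # exp(-||Q_i - target||^2 / (2 delta^2)): minimized when critics
        # move away from the shared target, encouraging disagreement.
        kernel = torch.exp(-(diff ** 2).sum(dim=-1) / (2 * delta ** 2))
        per_critic.append(kernel.mean())
    return torch.stack(per_critic).mean()

def critic_variance(q_ensemble, s, a):
    # Ensemble disagreement V_Q: high variance across critics on (s, a)
    # is read as an out-of-distribution / high-uncertainty signal.
    q_values = torch.stack([q(s, a) for q in q_ensemble], dim=0)
    return q_values.var(dim=0).mean()
```

In this sketch, the same critics that are repulsed on the OOD set are later queried through `critic_variance` to produce the deployment-readiness signal.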
Moreover, we extend the original formulation of DENN: we do not require training a \\u201creference function\\u201d in a first phase, which makes training less cumbersome. \\n\\n> *Lacking baseline algorithms.*\\n\\nThank you for this comprehensive list of suggested baselines. We appreciate the thoroughness of your review and would like to clarify several important distinctions between our work and the suggested comparisons:\\n- Fundamental Objective Difference: Our primary goal is to develop reliable OOD detection capabilities, not just policy robustness. While the suggested baselines (H2O, VGDF, PAR) focus on making policies robust to distribution shifts, our work aims to explicitly identify when the system encounters novel situations that require intervention without direct interaction in the novel environment. This is a crucial safety feature for real-world deployment.\\n- Real-world Data Usage: Several suggested baselines (H2O, VGDF, Off2On, PROTO) require real-world data during training or policy refinement. Our method deliberately avoids this requirement for safety reasons, using real-world data only for evaluation. This design choice makes our approach more practical for safety-critical applications where real-world training data may be scarce or risky to collect.\\n- Complementary Rather Than Competitive: Our method is complementary to many of these baselines rather than directly competitive. While RLPD and other offline-to-online methods focus on policy improvement, our work addresses the crucial preceding question: when is it safe to deploy a policy in the first place? This detection capability could actually enhance the safety of these existing methods.\\n\\nThat said, we acknowledge that adding some robust RL baselines could help demonstrate the indirect benefits of our approach to policy robustness. 
We will expand our evaluations to include appropriate baselines that do not require real-world training data, focusing on comparing OOD detection capabilities where possible.\\n\\n> *Lacking theoretical justifications*\\n\\nWhile we agree that such analysis could strengthen the paper, our primary focus here is on the empirical evaluation of UARL. We believe the experiments provide strong evidence of its effectiveness, and we will consider theoretical analysis in future work.\"}", "{\"title\": \"Authors' Response - Part 2\", \"comment\": \"> *the convergence property of the proposed algorithm should be discussed, especially line 8 in Algorithm 2 - what if this condition is never violated?*\\n\\nIf the condition in line 8 of Algorithm 2 is never violated, it likely means that the domain randomization is ineffective. High uncertainty ($V_Q$) despite diverse training environments suggests that the agent has not learned a policy that generalizes to real-world conditions. This indicates that either the domain randomization strategies are inadequate or the real-world data ($D_w$) is too different from the simulated environments. In such cases, continuing training would not be productive. Instead, it would require revisiting the domain randomization process and real-world data to ensure they are sufficiently representative of the target environment. It is important to note that this is not a shortcoming of our method, but rather a reflection of the limitations of the domain randomization process or the quality of the real-world data. Our approach assumes that these factors are appropriately addressed, and failure in this regard would require improvements outside the scope of our method. We highlighted these points in the revision at the end of Section 4.4.\"}", "{\"title\": \"Post-Rebuttal Comments\", \"comment\": \"Sorry for the late response. I thank the authors for providing a rebuttal and revising their manuscript (e.g., including the limitation part). 
Please find the comments below.\\n\\n> Offline-to-online RL or off-dynamics RL?\\n\\nThis paper lies actually in the category of off-dynamics RL. The authors should discuss off-dynamics RL. Prior works like [1,2,3] **do not necessarily assume that the target domain is accessible**. H2O [1] requires an offline target domain dataset and an online source domain environment. VGDF [2] and PAR [3] also conduct experiments when the target domain is fully offline. The authors wrote that these methods can *present significant safety risks when applied to complex, real-world systems*. I disagree with that. [1,2,3] all introduce conservative terms to ensure that the learned policy stays close to the support region of the target domain dataset. These can ensure the safety of the learned policy to some extent. A comparison is needed to see whether UARL can outperform these off-dynamics RL methods. **Please note that I am not requiring extra experiments here.**\\n\\n[1] When to trust your simulator: Dynamics-aware hybrid offline-and-online reinforcement learning. NeurIPS\\n\\n[2] Cross-domain policy adaptation via value-guided data filtering. NeurIPS\\n\\n[3] Cross-domain policy adaptation by capturing representation mismatch. ICML\\n\\n> Related work and novelty\\n\\nI hold my opinion that the related work is insufficient and the novelty of this paper is somewhat weak. The authors should cite more recent offline-to-online RL/robust RL/safe RL/off-dynamics RL papers. The authors wrote that their primary goal is *to develop reliable OOD detection capabilities without direct interaction with the target domain* and policy robustness is not the main focus of their work. I also disagree with this, because the authors emphasize policy safety and robustness numerous times in their paper. I appreciate that the authors include some results comparing UARL against ensemble-based offline RL methods like PBRL and RORL, additional results against safe offline RL ought to be included. 
I reiterate that the diversity term is not novel, and is borrowed directly from the existing literature. The modifications are minor as listed by the authors.\\n\\nBased on the rebuttal, it seems that the benefit of parameter randomization introduced in this paper against domain randomization is the introduction of a validation criterion. This does not seem to be a clear advantage to me since domain randomization is initially designed to create a variety of simulated environments with randomized properties and train a model that works across all of them, such that the true target environment can be covered. I believe *no clear guidance on which parameters to randomize or the extent of randomization needed* is not an issue or major flaw of domain randomization.\\n\\n> Lacking baselines\\n\\nI hold my opinion that the baseline methods are insufficient. The authors should at least include a comparison against off-dynamics RL methods and RLPD (which I believe is a very important baseline). Comparing against other methods like safe RL algorithms is optional but encouraged. The authors wrote that off-dynamics RL methods are not suitable for comparison because they *focus on making policies robust to distribution shifts*, but I think UARL also does so. The authors also claimed that their method is complementary to baselines rather than competitive. This can be a valid point but is not a good reason to involve too few baseline methods (e.g., not simply comparing X against X+UARL, but X against X+UARL and against X+others and against Y for some algorithms X, Y).\\n\\n> On the performance of UARL\\n\\nI hold my opinion that the performance improvement of UARL seems limited and incremental on some tasks and that UARL can still suffer from performance degradation during the fine-tuning phase (despite the balanced replay buffer mechanism). 
The limited performance improvement indicates that using UARL does not seem necessary for some tasks, which can be a negative signal for UARL.\\n\\n> What are the advantages of the diversity term in Equation 5 compared to other diversity terms?\\n\\nThe authors do not seem to answer my question. I am asking about the advantages of the diversity term in Equation 5 rather than its differences from other methods. Why should we prefer Equation 5 rather than other diversity terms like the one used in EDAC?\\n\\n> On the uncertainty measurement\\n\\nIf the uncertainty estimate is inaccurate, how can we tell that the OOD detection is reliable? This is extremely vital for UARL to distinguish OOD samples. I hence reiterate that it would be better to compare against some other uncertainty estimation methods and visualize the uncertainty measurement for a better comparison.\\n\\nOverall, I confirm my initial rating and do not favor acceptance at the current stage. It is my hope that the authors can find some of my review and comments helpful in improving the manuscript.\"}", "{\"summary\": \"This paper presents a novel approach aimed at solving the OOD problem faced by deploying reinforcement learning strategies in real-world scenarios when the distribution of training environments is shifted. The proposed approach tackles this issue by adopting a new diversity ensemble Q-network approach to OOD detection. Furthermore, the method incorporates an iterative policy fine-tuning method that starts with offline training in the original environment and gradually scales up to more stochastic environments through online interactions. 
Experimental results show that this approach outperforms the baseline algorithms in Mujoco environments with randomized environmental hyperparameters and typically requires fewer samples to converge.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes a novel OOD detection method and an iterative online policy fine-tuning training framework.\\n2. Good experimental results are obtained on Mujoco environments with randomized environmental hyperparameters, verifying the validity of the method.\\n3. The writing is good.\", \"weaknesses\": \"1. It should be further shown, either theoretically or experimentally, why the diversity term $\\\\mathcal{L}^{\\\\text{RL}}_{\\\\text{div}}$ in Eq. 5 allows the ensemble Q network to learn diverse values. Intuitively, minimising $\\\\text{exp}(-\\\\Vert Q_i(s, a) - (r + \\\\gamma Q_i(s^\\\\prime, a^\\\\prime)) \\\\Vert^2/(2\\\\delta^2))$ will allow the Q network to not converge quickly to a certain value on the repulsive dataset, but it does not guarantee that the ensemble Q network learns diverse values.\\n2. Further clarification is needed on how to calculate $V_Q$ in Algorithm 2 and how to calculate the critic variance in the uncertainty estimation experiments.\\n3. The ablation experiments in Appendix B.3 are not detailed enough. The training curves for different parameter combinations should be differentiated to illustrate the algorithm's parameter sensitivity to $\\\\lambda$ and $\\\\delta$ during training.\\n4. For each $E_i$, the randomized environmental hyperparameter range is determined without a common metric but as a hyperparameter, which may require a lot of time for online tuning in complex scenarios.\", \"questions\": \"1. According to Eq. 5, $R(s, a)$ is not sampled from the dataset $\\\\mathcal{D}$, but the general Q-function update for offline RL, as in Eq. 1, uses the sampled $r$. Is this a mistake here?\\n2. 
Is there a performance or computational advantage of UARL over direct processing of $E_\\omega$'s using the Robust RL algorithm or the algorithm with domain randomization techniques? Can this be illustrated experimentally?\\n3. Notice that in offline training (1st iteration), EDAC performs much worse than CQL and TD3+BC in many environments, which doesn't seem to match the experimental results in the EDAC article?\\n4. The experiments in this paper were all performed in Mujoco; how do we obtain the real-world demonstration dataset $\\mathcal{D}_\\omega$ in a simulation environment like Mujoco?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Response - After Rebuttal - Part 3\", \"comment\": \"> *What are the advantages of the diversity term in Equation 5 compared to other diversity terms?*\\n\\nThe diversity term in Equation 5 of UARL is designed specifically to enhance the **critic variance** in the Q-value output space, which is crucial for the OOD detection framework of our method. The critic output space is where we detect OOD events, so this learning rule is directly aligned with the intended objective of OOD detection. Unlike the diversity term in EDAC, which focuses on reducing gradient similarity across ensemble networks to improve Q-value accuracy and prevent overestimation, the UARL diversity term aims to **maximize variability in Q-values** to better separate ID and OOD data.\\n\\nThis distinction is critical: EDAC's diversity term serves a **primarily conservative Q-learning goal**, penalizing OOD actions by leveraging variance, thereby avoiding overestimation errors. In contrast, UARL\\u2019s diversity term is not concerned with conservative Q-value estimation but is explicitly crafted to amplify differences in Q-values to improve **uncertainty quantification and OOD detection**. 
This tailored focus makes the UARL diversity term uniquely suitable for our framework, as it aligns with our primary objective of robust OOD detection rather than mitigating overestimation bias.\\n\\n> *On the uncertainty measurement*\\n\\nWe appreciate the reviewer\\u2019s concern regarding the importance of reliable uncertainty estimation for effective OOD detection. As noted in the revision, we have already included comparisons with OOD-aware baselines, such as PBRL and RORL, in Appendix B.5. To further strengthen our argument, we have now added additional comparisons with EDAC and DARL [1], which specifically focus on uncertainty estimation, in Appendix B.5. These comparisons provide a more comprehensive evaluation of the effectiveness of our method in distinguishing OOD samples.\\n\\n[1] Zhang, Hongchang, Jianzhun Shao, Shuncheng He, Yuhang Jiang, and Xiangyang Ji. \\\"DARL: distance-aware uncertainty estimation for offline reinforcement learning.\\\" AAAI 2023.\"}", "{\"title\": \"Authors' Response - After Rebuttal - Part 1\", \"comment\": \"We would like to thank the reviewer for engaging in a very constructive discussion. We appreciate your feedback and insights to improve our paper.\\n\\n> *This paper lies actually in the category of off-dynamics RL.*\\n\\nWe appreciate the reviewer\\u2019s insightful comments and would like to address the point regarding H2O, VGDF, and PAR. While these works do, in fact, assume accessibility to the target domain, we acknowledge their contributions to ensuring safety in off-dynamics RL settings. 
Below, we provide clarification based on the specific assumptions made by these methods:\\n- H2O: It fills up half of the replay buffer with target domain data, which is used to train the policy, based on the authors' official implementation:\\n - https://github.com/t6-thu/H2O/blob/main/SimpleSAC/sim2real_sac_main.py#L185-L186 \\n - https://github.com/t6-thu/H2O/blob/main/SimpleSAC/sim2real_sac_main.py#L43 \\n \\n While it is assumed that the target domain data is accessible offline, the data requirements of RL algorithms make it impractical to train a performant RL agent with only a limited number of data points. However, in UARL, since target domain data is used only for validation, it works well without requiring extensive use of the target domain data during training, making it more efficient in handling limited data availability.\\n- VGDF: As stated in VGDF\\u2019s paper Section 1, the proposed method assumes a limited number of online **interactions** with the target domain, while UARL only requires a **limited** pre-collected dataset. For instance, from VGDF\\u2019s Abstract: \\\"we consider the online dynamics adaptation problem, in which case the agent can access sufficient source domain data **while online interactions with the target domain are limited**.\\\" Similarly, in Section 1: \\\"In contrast to these works, we consider a more general setting called online dynamics adaptation, where the agent can access sufficient source domain data and **a limited number of online interactions with the target domain**.\\\" Section 3 and Definition 3.1 further clarify this assumption, specifying a 1:10 ratio of online target domain data to source data, amounting to $10^5$ data points **used during the training** (Appendix D.2). In contrast, UARL uses target domain data solely for validation rather than training, achieving strong performance with just 100 trajectories\\u2014orders of magnitude fewer than VGDF. 
This highlights UARL\\u2019s efficiency and suitability for data-limited settings.\\n- PAR: The same criticism directed at VGDF also applies to PAR. As stated in Section 1 of the PAR paper: \\\"We \\u2026 consider learning policies with sufficient source domain data (either online or offline) and **limited online interactions with the target domain**.\\\" Additionally, similar to VGDF, Section 5.1 of the PAR paper specifies a source domain to online target domain data ratio of 1:10, raising similar concerns as those in VGDF.\\n- Regarding safety risks, while methods like H2O, VGDF, and PAR incorporate conservative regularization terms to keep the learned policy within the target domain\\u2019s support region, these methods still pose safety risks in high-stakes, real-world systems. This is because they are not designed to detect when OOD shifts occur; they simply act conservatively within known domains. As highlighted in [1], this distinction is critical: \\\"Robustness\\\" focuses on creating models that are resilient to adversaries, unusual situations, and Black Swan events, while \\\"Monitoring\\\" involves detecting malicious use, monitoring predictions, and discovering unexpected model functionality. Therefore, while these conservative methods provide some safeguards, they cannot fully mitigate the risks associated with OOD scenarios in complex, high-risk environments.\\n\\nFinally, we are uncertain how such a comparison could be conducted without additional experiments, as the reviewer suggests, since H2O, VGDF, and PAR require training the policy with target domain data, while our method does not. 
If the reviewer could provide further clarification or specific suggestions, we would be happy to take the necessary steps to address their concerns.\\n\\n[1] Hendrycks, Dan, Nicholas Carlini, John Schulman, and Jacob Steinhardt. \\\"Unsolved problems in ml safety.\\\" arXiv preprint arXiv:2109.13916 (2021).\"}", "{\"summary\": \"In this paper, a novel RL pipeline, Uncertainty-aware Adaptive RL (UARL), has been proposed to enhance policy generalization across diverse variations of a given environment. UARL frames distribution shifts as OOD issues and integrates a new OOD detection method to quantify uncertainty. This method enables iterative policy fine-tuning, beginning with offline training on a limited state space and gradually expanding to more diverse variations of the same environment through online interactions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem raised in this paper is important and the experiments are solid.\", \"weaknesses\": \"Weakness:\\n\\n1) The core problem in this paper, i.e., distributional shift, and many important concepts are not discussed in detail. In the introduction, the authors discuss the theoretical shortcomings of robust RL, safe RL, and the distributional shift in offline2online RL, but they ignore the relationships between these concepts. For example, what is the relationship between robust RL and the distributional shift in the offline setting? Furthermore, what is the difference between the distributional shift problems in the offline RL and offline2online RL settings? Why can the proposed method successfully solve the problem of distributional shift? Please answer this question from a high-level view.\\n\\n2) In offline (to online) RL, the uncertainty quantifier is defined clearly as the upper bound of the error produced by the empirical Bellman operator (see [1], Eq.(4.1)). 
One may then ask whether the uncertainty defined in this paper's Eq.(5) is related to the uncertainty quantifier known from [1]. Is it a valid uncertainty quantifier theoretically? The authors should discuss this point.\\n\\n[1] Jin et al., Is Pessimism Provably Efficient for Offline RL.\\n\\n3) In offline RL, there have been many uncertain-aware methods to deal with the distributional shift problem, such as [2] and [3]. In the two listed works, they both penalize the OOD actions by the constructed uncertainty quantifiers. So in our view, the method in this work is not beyond the scope of these methods, and the paper lacks sufficient discussion of its advantages over the existing uncertain-aware methods.\\n\\n[2] Bai et al., Pessimistic bootstrapping for uncertainty-driven offline reinforcement learning.\\n[3] Sun et al., Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning.\\n\\n4) The convergence property of the proposed algorithm should be discussed, especially line 8 in Algorithm 2 - what if this condition is never violated?\", \"questions\": \"see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Response - Part 1\", \"comment\": \"Thank you for your detailed review and for engaging with our work. We appreciate the time and effort you put into your thoughtful feedback. We realize there may have been some miscommunication about the problem we are addressing, and we have taken steps to clarify our goals and contributions in the revised manuscript. Your comments have helped us identify areas where clearer explanations were needed.\\n\\n> *The core problem in this paper, i.e., distributional shift, and many important concepts are not discussed in detail.*\\n\\nWe acknowledge that the relationships between distributional shift across different RL paradigms deserve a more thorough treatment. 
Our work primarily addresses detecting distributional shift during offline-to-online transition, which differs from purely offline RL in a crucial way: offline RL faces fixed distributional gaps between training and deployment data, while offline-to-online transition must handle dynamic shifts as the policy begins interacting with the environment. Our method addresses distributional shift through progressive environmental randomization that systematically expands the policy's exposure to different dynamics. Unlike robust RL which typically optimizes for worst-case performance across a fixed distribution, we actively shape the distribution of experiences to build reliable uncertainty estimates. This helps the policy identify when it's encountering novel scenarios and adapt appropriately. We will revise the introduction to clarify these conceptual relationships and better motivate how our approach bridges the gap between offline training and online adaptation. \\n\\n> *In offline (to online) RL, the uncertainty quantifier is defined clearly as the upper bound of the error produced by the empirical Bellman operator (see [1], Eq.(4.1)).*\\n\\nIn both papers, the goal of quantifying uncertainty is to identify states and actions where the learned policy might not be reliable due to limited data or distributional shifts. While Jin et al.\\u2019s uncertainty quantifier provides a theoretical upper bound on Bellman operator errors, our variance-based metric (Equation 5) offers a practical and adaptive approach for estimating uncertainty, particularly effective in offline-to-online RL. Both methods aim to identify unreliable state-action regions, but our ensemble variance highlights critic disagreement, serving as a scalable proxy for uncertainty under distributional shifts. Though not grounded in formal bounds like Jin et al., our method has demonstrated robust empirical performance. 
Future work could further investigate its theoretical properties, potentially bridging the gap between heuristic and formal uncertainty quantification.\\n\\n> *In offline RL, there have been many uncertain-aware methods to deal with the distributional shift problem, such as [2] and [3].*\\n\\nWhile MOBILE and PBRL make valuable contributions to robust offline RL, there is a fundamental difference in objectives and methodology. MOBILE and PBRL focus on robustness against OOD scenarios by penalizing uncertain actions during offline training. Their primary goal is to learn conservative policies that avoid OOD situations. This is achieved through uncertainty-based penalties in Q-value estimation and specialized sampling techniques. \\nIn contrast, our work addresses a distinctly different challenge: explicit OOD detection during deployment, **without direct interactions with the OOD scenarios**. Rather than just avoiding OOD actions, we aim to actively identify when the agent encounters novel scenarios. This capability is crucial for:\\n 1. Maintaining awareness of when the current policy might be unreliable\\n 2. Enabling informed decisions about when to request human intervention\\n 3. Guiding targeted data collection for policy improvement\\n\\nOur progressive environmental randomization approach specifically builds up the agent's ability to distinguish between in-distribution and OOD states, rather than just being robust to them. 
While robustness emerges as a beneficial side effect of our method during iterative fine-tuning, it is not the primary objective.\\nWe provided a discussion in Appendix C of the revision to more clearly articulate this distinction between robustness-focused approaches (like MOBILE and PBRL) and our detection-focused methodology.\"}", "{\"summary\": [\"The paper introduces Uncertainty-aware Adaptive RL (UARL), an innovative framework to tackle distributional shifts and out-of-distribution (OOD) issues when deploying reinforcement learning (RL) policies in real-world environments. This is accomplished by implementing OOD detection to quantify policy uncertainty and iteratively refine high-uncertainty regions (of the state space), adapting the policy\", \"for safe and effective performance deployment. UARL demonstrates several notable advancements,\", \"A method for quantifying policy uncertainty using OOD detection.\", \"An offline-to-online (O2O) adaptation strategy that balances online and offline data, utilizing a diverse ensemble of critics to better handle distributional shifts.\", \"Experiments on MuJoCo continuous control tasks that validate UARL\\u2019s effectiveness in terms of performance, robustness, and sample efficiency.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"- UARL presents a compelling approach to address the challenges of deploying a policy in RL. The progressive expansion of state space via repulsive locations and a balanced replay buffer to manage data distribution shifts are novel and theoretically sound.\\n- The usage of an ensemble of diverse critics to perform OOD detection and policy refinement represents a robust methodology that is supported by the experimental results\", \"weaknesses\": \"The paper could better highlight its unique contributions compared to existing OOD and ensemble-based offline RL methods. 
A clearer differentiation of UARL's specific advancements would help underscore its novelty within the landscape of similar approaches.\", \"The experimental validation, limited to a few environments such as the Ant-v4 and HalfCheetah-v4 environments, may not fully capture the method\\u2019s effectiveness across a diverse range of tasks. Extending the experiments to include more varied environments would provide a more comprehensive assessment and enhance the generalizability of the results.\", \"A comparison with recent state-of-the-art methods, such as PBRL [1] and RORL [2], would strengthen the empirical evaluation. By benchmarking UARL against PBRL and similar approaches, the paper could provide a more robust validation of its improvements in uncertainty handling and performance stability.\", \"[1] Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning\", \"[2] RORL: Robust Offline Reinforcement Learning via Conservative Smoothing\"], \"questions\": \"1. Comparing the computational overhead of your method with that of baseline algorithms would strengthen your work. Could you include this information to provide a clearer understanding of its efficiency?\\n\\n2. Does this algorithm fall within the scope of Offline Reinforcement Learning? If so, it would be helpful to clarify its placement within the Offline Reinforcement Learning landscape. Enhancing the abstract and introduction to better position the algorithm within this broader context would significantly improve the clarity and impact of your paper.\\n\\nI am open to raising my score based on these improvements.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Comment\", \"comment\": \"As the discussion period draws to a close, we wanted to follow up to check if you have had a chance to review our rebuttal. 
If you have any remaining questions or concerns, we would be glad to address them. Thank you.\"}", "{\"metareview\": \"The paper proposes Uncertainty-aware Adaptive RL (UARL) to enhance policy generalization across diverse variations of a given environment. UARL frames distribution shifts as OOD generalization and uses an OOD detection method to quantify uncertainty. The main weaknesses of the paper pertain to the lack of simplicity of the approach, comparison to other works in off-dynamics RL, lack of ablations of all components of the approach, and unclear effectiveness of their approach as tasks are scaled further. More concretely, personally I think that the claim \\\"UARL can generalize to OOD environments without running policies in the OOD environment\\\" needs more justification. It is also unclear how one would tune hyperparameters of the method in an actual real-world deployment, which is the motivation of this approach.\\n\\nGiven the competitiveness of papers this year, and so many unclear bits surrounding this paper, I agree with the reviewers' overall opinions and decide to unfortunately reject the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer FRBA raised some interesting points, most of which I agree with. I would suggest that the authors look at these points and try to incorporate major changes to ensure that comparisons to other baselines and methods are accounted for. While I agree that there are minor differences between algorithms / settings, please also do note that these comparisons will generally help readers and practitioners make better sense of your method.\"}", "{\"summary\": \"The paper proposes a novel pipeline to detect OOD environment variations and gradually fine-tune the agent until high-confidence safe deployment is possible.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The overall method is well-motivated and clearly stated.\\n2. 
Weighting samples in the normal dataset and repulsive dataset differently is intuitive and is demonstrated to be effective empirically.\\n3. Using a set of critics and their variance as a measure of environmental uncertainty explores new possibilities beyond the existing DENN method.\", \"weaknesses\": \"1. The discussion of repulsive locations could be written more formally and thus more concisely. The current version is a bit too dense verbally. I'm also a bit confused by Figure 2: while it's a nice visual, is it something derived from the experiments or is it just a conceptual illustration?\\n2. Lack of related work: changing environment parameters to achieve repulsive locations is quite related to the literature on curriculum learning. A good survey to start with is https://arxiv.org/pdf/2003.04960. Also, blindly varying the environmental parameters may lead to unexpected harmful environments dampening the agent training: https://openreview.net/forum?id=hp4yOjhwTs&noteId=vZMeHQbnJK \\nI would suggest the authors add a subsection for curriculum reinforcement learning in the related work for a more thorough introduction to the problem background.\", \"questions\": \"1. Typo: Algo 1 line 7, should it be \\\"OOD\\\" instead of \\\"ODD\\\"?\\n2. How do you define \\\"progressively expanding the randomization range\\\" for different environment parameters? More specifically, increasing friction by 1% and increasing the agent's mass by 1% may have vastly different impacts on the task difficulties. Could you discuss more on the relative impact of changing each parameter on the environment difficulty?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Response - Part 2\", \"comment\": \"> *Is there a performance or computational advantage of UARL over direct processing of $E_\\\\omega$'s using the Robust RL algorithm or the algorithm with domain randomization techniques? 
Can this be illustrated experimentally?*\\n\\nUARL is designed to handle OOD scenarios without relying on direct interactions with the target environment. Unlike Robust RL or domain randomization, which often require extensive data or access to the full distribution of environments, UARL focuses on modeling uncertainty to generalize effectively across diverse conditions. The goal is to minimize dependence on refining a policy on real-world data, making it safer and more practical for applications where direct interactions with the target environment are risky or infeasible.\\n\\n> *Notice that in offline training (1st iteration), EDAC performs much worse than CQL and TD3+BC in many environments, which doesn't seem to match the experimental results in the EDAC article?*\\n\\nWe believe this discrepancy may be due to EDAC's potential overfitting to the original D4RL dataset. As mentioned in Section 5, for all experiments and all baselines, we used the implementation and hyperparameters provided at https://github.com/corl-team/CORL, applying them consistently across algorithms, including EDAC, CQL, and TD3-BC.\\n\\n> *The experiments in this paper were all performed in Mujoco, how do we obtain the real-world demonstration dataset $\\\\mathcal{D}_\\\\omega$ in the simulation environment like Mujoco?*\\n\\nSince Mujoco is very different from real-world scenarios, there is no real data. In our current experiment, the real-world demonstration dataset $\\\\mathcal{D}_\\\\omega$ is obtained by running the \\u201cexpert\\u201d policy on the environment with a wide range of values for randomized parameters that differ from the original Mujoco environment, which acts as \\u201creal-world data\\u201d.\\n\\nWe should have clarified this point further in the paper. Thanks for bringing it to our attention. 
We included it in the revision, Section 5.\"}", "{\"title\": \"Authors' Response - Part 1\", \"comment\": \"We sincerely thank you for your insightful feedback and detailed comments on our work. Your suggestions have significantly contributed to improving the clarity and presentation of our research.\\n\\n> *The paper could better highlight its unique contributions compared to existing OOD and ensemble-based offline RL methods. A clearer differentiation of UARL's specific advancements would help underscore its novelty within the landscape of similar approaches.*\\n\\nWe understand the reviewer\\u2019s concerns regarding the framing and narrative of the paper in its current format. We will work to adjust the structure and focus to clarify our main contributions and streamline the discussion around key topics. Specifically, we will refine the emphasis on our central problem domain to create a more cohesive and targeted narrative, reducing the scope of secondary discussions where possible. The changes are reflected in Sections 1 and 2 in the revision.\\n\\n> *The experimental validation, limited to few environments such as the Ant-v4 and HalfCheetah-v4 environments, may not fully capture the method\\u2019s effectiveness across a diverse range of tasks. Extending the experiments to include more varied environments would provide a more comprehensive assessment and enhance the generalizability of the results.*\\n\\nWhile we believe our experiments provide a solid foundation for validating our method's core principles, we acknowledge the value of broader testing. That is why we are working on providing results on more environments to improve the generalizability of our findings.\\n\\n> *A comparison with recent state-of-the-art methods, such as PBRL[1], RORL[2], would strengthen the empirical evaluation.*\\n\\nWe want to clarify an important distinction between our work and RORL/PBRL that we should have emphasized more clearly in the paper. 
While RORL and PBRL represent significant advances in robust offline RL, they tackle a fundamentally different problem than our work. These methods focus on building robust policies that can maintain performance despite encountering OOD scenarios, primarily through uncertainty-based penalization during training. This is distinct from our core objective: developing reliable mechanisms to **detect** when a system is operating in OOD conditions.\\n\\nThe distinction becomes clear when considering real-world applications: a robust policy might continue operating in OOD conditions (as RORL/PBRL aim to achieve), but this could be undesirable in safety-critical systems where we need to explicitly recognize such situations and potentially halt operation or seek human guidance. Our method's progressive environmental randomization serves to build this detection capability, training the uncertainty estimator to recognize the boundaries between familiar and novel situations.\\n\\nWhile our approach does yield some robustness benefits through its iterative fine-tuning, this is secondary to its main purpose of reliable OOD detection. We revised the paper to better articulate this fundamental difference in objectives and clarify how our method specifically targets the detection challenge rather than just robustness, reflected in Appendix C.\\n\\nWe deemed PBRL and RORL not directly comparable to our work; nevertheless, we included results for a limited experimental setting comparing our method with PBRL and RORL in Appendix B.5 of the revised paper for the readers\\u2019 reference.\\n\\n> *Comparing the computational overhead of your method with that of baseline algorithms would strengthen your work. Could you include this information to provide a clearer understanding of its efficiency?*\\n\\nOur method does introduce additional computational complexity, primarily through the repulsive dataset, which impacts memory usage more significantly than computation time. 
Most baseline methods (CQL, AWAC, TD3+BC) already use dual-critic architectures, so our additional computational overhead stems from maintaining separate nominal and repulsive datasets, the diversity loss calculation, and the replay buffer balancing mechanism.\\n\\nOur measurements suggest that memory usage increased by approximately 50% compared to baseline methods, primarily due to storing both nominal and repulsive datasets. That is also subject to change given the desired ratio of nominal data points to repulsive ones. The computational time overhead is relatively modest, in the 10-20% range, and is equivalent to training a regular actor-critic method with a larger batch size.\\n\\nOur results suggest that uncertainty awareness provides significant value that outweighs the additional computational requirements.\\nWe included these results in Appendix D of the revision.\"}
The point made in Section 5.1 is to demonstrate that the introduction of UARL does not negatively impact the performance of the underlying algorithms.\\n\\n\\n> *UARL can still suffer from performance degradation during the fine-tuning phase (e.g., see Figure 5)*\\n\\nThis is also a valid observation; however, the balanced replay buffer mechanism in UARL (Section 4.3) results in less performance degradation compared to baseline methods, as shown in Figure 5.\\n\\n> *In Lines 184-185, you wrote, disagreement among ensemble models, particularly at the boundary of the training distribution, what do you exactly mean by at the boundary of the training distribution?*\\n\\nBy \\\"at the boundary of the training distribution,\\\" we refer to data points that lie near the edge of the training data distribution, where the model has less certainty and potentially higher variance in its predictions. These points are typically where the model might struggle to generalize, as they are far from the majority of training data, and thus, where disagreement among ensemble models is most valuable. Introducing diversity at these boundary points (or in OOD regions) leads the ensemble models to be more diverse OOD. We enhanced Section 3.2 in order to make this point clearer.\\n\\n> *what are the advantages of the diversity term in Equation 5 compared to other diversity terms?*\\n\\nEDAC aims to prevent over-estimation of Q-values when training with OOD samples, for a conservative training process. This is achieved by diversifying the gradients of Q-values.\\n\\nUARL enforces the diversity explicitly in the critic output space, which is the space of interest since this is where we compute the critic variance. In this paper, we focused on an empirical evaluation of our approach. Notably, Fig. 4 illustrates the benefits of UARL-enhanced AWAC compared to the base AWAC method. 
The diversity of UARL leads to a higher ability to discern ID from OOD samples, while maintaining a high performance (Fig. 3).\\n\\n> *How can the authors tell that the uncertainty measurement provided in this paper is valid?*\\n\\nOur primary goal with UARL is to detect OOD events rather than to provide an exact estimate of uncertainty. While uncertainty estimation plays a role in guiding the detection process, the focus is on identifying regions where the policy is likely to encounter novel dynamics. We acknowledge that uncertainty calibration is an important consideration for uncertainty estimation methods, but due to the nature of UARL, we do not expect perfect calibration. Instead, we focus on using the uncertainty measurements to highlight areas of high risk and improve OOD detection. We will consider further comparisons with other uncertainty estimation methods in future work, but the current approach is primarily validated through its ability to detect OOD events effectively in our experiments.\\n\\n> *do you have any parameter study on the threshold parameter in Algorithm 2?*\\n\\nThreshold at Algorithm 2 acts as a stopping criterion in fine-tuning, allowing the agent to expand the state space until the variance ($V_Q$) of the critic ensemble on the real-world dataset ($D_w$) falls below it. A lower threshold enforces a stricter certainty requirement, enhancing safety by requiring more consistent value estimates but potentially increasing training time. In contrast, a higher threshold could lead to earlier deployment, risking premature exposure to uncertain scenarios. Tuning the threshold may indeed vary per task, as each environment\\u2019s dynamics can impact the balance between safety and efficiency. While $V_Q \\\\le \\\\text{threshold}$ provides a proxy for confidence, it is not a formal safety guarantee. 
True safety would benefit from future work on approaches such as formal verification, explicit safety constraints in learning, or conservative policy updates, which go beyond the current paper\\u2019s scope. However, per the reviewer\\u2019s suggestion, we will conduct a study on the role of the threshold and its impact on performance.\"}" ] }
0Wl6h2CZeJ
RealTracker: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos
[ "Nikita Karaev", "Iurii Makarov", "Jianyuan Wang", "Natalia Neverova", "Andrea Vedaldi", "Christian Rupprecht" ]
Most state-of-the-art point trackers are trained on synthetic data due to the difficulty of annotating real videos for this task. However, this can result in suboptimal performance due to the statistical gap between synthetic and real videos. In order to understand these issues better, we introduce RealTracker, comprising a new tracking model and a new semi-supervised training recipe. This allows real videos without annotations to be used during training by generating pseudo-labels using off-the-shelf teachers. The new model eliminates or simplifies components from previous trackers, resulting in a simpler and smaller architecture. This training scheme is much simpler than prior work and achieves better results using 1,000 times less data. We further study the scaling behaviour to understand the impact of using more real unsupervised data in point tracking. The model is available in online and offline variants and reliably tracks visible and occluded points.
[ "Point tracking", "Optical flow", "Motion estimation", "Pseudo labelling" ]
Reject
https://openreview.net/pdf?id=0Wl6h2CZeJ
https://openreview.net/forum?id=0Wl6h2CZeJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zSHb4aNnkw", "tm5UkzC5p3", "oSrD1TAwCc", "l1jKzBgiwS", "eW2UR4l6v9", "ZV2Jo5QNdr", "YbYtar3dq3", "OK5hnQc91e", "FTvmJAs5IS", "Aby61eQRsG", "6hQfmMnV3i" ], "note_type": [ "official_comment", "official_review", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1732576081235, 1730660441132, 1737523549287, 1730562867835, 1731120345225, 1732576417967, 1732576314762, 1732576223893, 1732576021259, 1730875638388, 1734773087621 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3032/Authors" ], [ "ICLR.cc/2025/Conference/Submission3032/Reviewer_MNzP" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3032/Reviewer_4EZJ" ], [ "ICLR.cc/2025/Conference/Submission3032/Reviewer_Gpaf" ], [ "ICLR.cc/2025/Conference/Submission3032/Authors" ], [ "ICLR.cc/2025/Conference/Submission3032/Authors" ], [ "ICLR.cc/2025/Conference/Submission3032/Authors" ], [ "ICLR.cc/2025/Conference/Submission3032/Authors" ], [ "ICLR.cc/2025/Conference/Submission3032/Reviewer_PQwv" ], [ "ICLR.cc/2025/Conference/Submission3032/Area_Chair_xtyL" ] ], "structured_content_str": [ "{\"comment\": \"> Using pseudo-labels for training trackers is well explored, e.g., Dino-Tracker with precomputed flow, CoTracker3 with pseudo-labelling. Please illustrate more differences with these trackers for better highlighting the contributions.\\n\\nDinoTracker is a test-time optimization method, which means it needs to be optimized for each video during inference. Fitting DinoTracker to a single video with 100 frames takes about 1.6 hours. In contrast, we pretrain a network on a real dataset using pseudo labels and then evaluate it on five different benchmarks without additional test-time optimization. RealTracker operates in real time (90 frames per second for 100 points). 
Also, optical flow is one out of five losses used for test-time optimization in DinoTracker. Here we supervise with a single loss for pseudo labelled tracks. CoTracker3 was released after the ICLR deadline.\\n\\n\\n> Are there any specific concerns for choosing a teacher model for pseudo label generation? Does the better teacher model commonly lead to better tracking performance? Can a single teacher model well support the tracker learning?\\n\\nIn Tab. 5 we ablate different teacher setups and show that adding any teacher improves performance, even if the teacher itself is worse than the student model. The more diverse the teachers, the better the result. It appears that the student model absorbs complementary knowledge from different teachers even if they're worse than the student model. A better teacher model indeed leads to better tracking performance (adding CoTracker or another RealTracker as a teacher is better than adding TAPIR). Interestingly, a single teacher (the model itself) also improves the tracking results (see Tab. 4)!\\n\\n> In Table 2, the time of the per frame and per tracked point is shown. For the online variant, what\\u2019s the overall tracking speed (i.e., fps) given an online testing video?\\n\\nThe speed of the online model is 25 frames per second for 1000 simultaneously tracked points, 90 frames per second for 100 points.\\n\\n\\n> Missing Refs for discussion. 
For completeness, please include more pseudo-label based tracker training approaches [1,2,3,4] for discussion in the related work.\\n\\nThank you for pointing this out; we have cited these works in the revised version of the paper:\\n\\nProgressive Unsupervised Learning for Visual Object Tracking [1] introduced an unsupervised learning framework that entirely removed the need for annotated videos in visual tracking.\\n\\nUnsupervised Learning of Accurate Siamese Tracking [2] proposed an unsupervised learning framework based on Siamese networks for training trackers with cycle consistency.\\n\\nDINO-Tracker [3] combines test-time per-video optimization with DINO features to improve point tracking.\\n\\nCoTracker3 [4] introduces a more efficient architecture and a simple pseudo-labelling pipeline to further improve its performance by training on real data.\"}", "{\"summary\": \"The approach leverages other point trackers to produce training data for their point tracker. Supposedly, less additional training data is required compared to other point trackers. The biggest contribution is that the other trackers use real data and not synthetic data for training. Other approaches in the past have typically used point data for tracking.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper incrementally builds upon point trackers by producing a better approach that leverages other point trackers to produce supervised training data. In the past, other trackers have used synthesized data; however, this approach is based on real data. 
The results seem to be better than other point trackers.\", \"weaknesses\": \"It is not clear what types of motions were tested, if parallax for motion is required; what about zooming-like motions with no parallax, does the method work?\\nThe % of occlusion, in terms of coverage of the object and in terms of time occluded, was not clearly tested.\\nThe limitations and failure cases of the algorithm were not explored.\", \"questions\": \"From Table 2, it appears that the training set does matter in the results: the methods trained with Kub+15M performed on average better than the methods trained with Kub, please explain and elaborate. What is the difference?\\nWhy does the offline method perform better than the online method? Intuitively I would assume the opposite.\\nWhat are the limitations and failure cases?\\nTable 6, why does SIFT turn in the best results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces RealTracker, a point tracker that combines several ideas from other related trackers but eliminates some components and simplifies others. RealTracker also designs a semi-supervised training protocol, where real videos are annotated utilizing several off-the-shelf trackers. With this protocol, RealTracker can achieve encouraging results on the Kinetics, RGB-S, and DAVIS datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. RealTracker combines valuable ideas from the recent state-of-the-art point trackers and eliminates some unimportant modules.\\n2. RealTracker proposes a simple semi-supervised training protocol and achieves better results on several public datasets compared to state-of-the-art trackers.\\n3. RealTracker explores the training scaling law via its proposed training protocol.\", \"weaknesses\": \"1. 
The idea of using trackers to annotate unlabeled datasets, such as [1], is not new.\\n2. The authors should use the Kub+15M data to train the CoTracker and TAPTR and verify the proposed method's effectiveness.\\n3. To prove the effectiveness of the RealTracker, it is suggested that confidence and visibility be visualized.\\n4. More ablation studies are suggested to verify that eliminating some modules in the listed trackers and simplifying some modules is useful, including the computation cost and tracking performance.\\n\\n[1] Muller M, Bibi A, Giancola S, et al. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 300-317.\", \"questions\": \"Please follow the weakness. If the issues are addressed, I will improve the rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a simpler and better point tracking approach by pseudo-learning real videos. Specifically, the proposed approach allows real videos without annotations to be used during training by generating pseudo-labels using off-the-shelf teachers. The proposed approach explores to use real video for training point tracking models w/o annotations. 
Moreover, the authors also study the scaling law to understand the impact of using more real training videos.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper focuses on an interesting problem in the community, i.e., aiming to train TAP models w/ real videos w/o annotations, since the previous approaches mainly focus on learning w/ synthetic datasets;\", \"The proposed RealTracker shows that a simpler architecture and training protocols can outperform SOTA trackers like BootsTAPIR and LocoTrack;\", \"The paper is well written and organized;\"], \"weaknesses\": \"- Using pseudo-labels for training trackers is well explored, e.g., for some online learning-based trackers like Dino-Tracker, it uses pre-computed optical flow which provides the pseudo ground truth pixel-level correspondences for online training the tracker. For CoTracker3, pseudo-labelling is explored. Please illustrate more differences with these trackers for better highlighting the contributions;\\n- Are there any specific concerns for choosing a teacher model for pseudo label generation? Does a teacher model with higher tracking performance commonly lead to better tracking performance? Can a single teacher model well support the tracker learning?\\n- In Table 2, the per-frame and per-tracked-point time is shown. For the online variant, what\\u2019s the overall tracking speed (i.e., fps) given an online testing video?\\n- Missing Refs for discussion. 
For completeness, please include more pseudo-label based tracker training approaches [1,2,3,4] for discussion in the related work.\\n\\n[1] Progressive Unsupervised Learning for Visual Object Tracking;\\n\\n[2] Unsupervised Learning of Accurate Siamese Tracking;\\n\\n[3] DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video;\\n\\n[4] CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos;\", \"questions\": \"Overall, I think this is an interesting paper that focuses on an essential problem in the community, i.e., enabling existing TAP trackers to leverage real videos w/o annotations for training. The idea is somewhat incremental but effectively addresses an essential problem in a simple yet effective way. Thus my current rating is ``accept''. I would like to see more author rebuttal in terms of differences w/ existing pseudo label based approaches as mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> The idea of using trackers to annotate unlabeled datasets, such as [1], is not new.\\n\\nThank you, we have cited [1] TrackingNet by Muller et al. in the revised version. Point tracking however is different from object tracking in an important way: it requires not only localizing the same object in another frame, but also **a specific point on the same object**, which makes the task more difficult.\\nHere we work with point tracking and show that synthetic-trained teacher trackers **don't have to be better** than the student tracker. 
For example, **synthetic-trained RealTracker is better than all its synthetic-trained teachers** and its performance still improves significantly after training with our pipeline.\\n\\n\\n> The authors should use the Kub+15M data to train the CoTracker and TAPTR and verify the proposed method's effectiveness.\\n\\nThe 15M dataset (15M from Kub+15M) introduced in BootsTAPIR was not released and nobody has access to this data. However, with our method, we improve over BootsTAPIR with much less data: RealTracker is better after training on just 15 **thousand** real videos compared to 15 **million** in BootsTAPIR. \\nIn Fig. 1 we show how CoTracker and LocoTrack improve after training with our pipeline on up to 100 thousand real videos. We don't have enough data and resources to scale training any further (to the proposed 15 million videos as was done in BootsTAPIR).\\n\\n> To prove the effectiveness of the RealTracker, it is suggested that confidence and visibility be visualized.\\n\\nWe visualize visibility in Fig. 4 and in the supplementary videos (`index.html`, 5 videos in the header and videos from \\\"Object-centric tracking on a regular grid\\\"), where visible points are filled with color, and invisible points are left empty. \\nConfidence simply estimates whether the point is within 12 pixels from the ground truth and allows to improve visibility predictions by discarding points with low confidence. So, confidence is already a part of the visualized visibility: `final_visibility = (visibility * confidence) > 0.6`\\n\\n> More ablation studies are suggested to verify that eliminating some modules in the listed trackers and simplifying some modules is useful, including the computation cost and tracking performance.\\n\\nThank you for pointing this out. Below, we show ablations for re-adding the removed 4D correlation encoder or global matching. We ablated the online model, showing the average performance on TAP-Vid. 
Speed is measured in frames per second for the simultaneous tracking of 100 points:\\n\\n| | AJ\\u2191 | $\\delta_\\mathrm{avg}$\\u2191 | OA\\u2191 | Speed\\u2191 | Number of parameters\\u2193 |\\n| :---: | :---: | :---: | :---: | :---: | :---: |\\n| RealTracker | 64.0 | 76.8 | **90.2** | **90 fps** | **25M** |\\n| RealTracker + 4D corr encoder | **64.3** | 76.9 | 90.1 | 63 fps | 28M |\\n| RealTracker + Global matching | 63.6 | **77.0** | 89.9 | 82 fps | 26M |\\n\\nThe impact of these modules on performance is small and inconsistent across metrics, while they affect the speed of the method. We thus find that they do not need to be included in RealTracker.\\n\\nWe are open to other suggestions for interesting experiments from the reviewer.\"}", "{\"comment\": \"> It is not clear what types of motions were tested, if parallax for motion is required; what about zooming-like motions with no parallax, does the method work? The % of occlusion, in terms of coverage of the object and in terms of time occluded, was not clearly tested.\\n\\nThe method works for any type of motion, including zooming motions. \\nThank you, we've conducted two additional experiments with occlusions on TAP-Vid DAVIS for the rebuttal. In both of these experiments, we investigate the effect of occlusions of different sizes and lengths on the tracking accuracy on TAP-Vid DAVIS. Specifically, we occlude all tracked points with black circles of different sizes for several consecutive frames and measure how this affects the tracking accuracy of RealTracker offline while tracking query points (see the appendix of the revised version of the paper for a visual explanation).\\n\\nIn the first experiment, we occlude all tracked points in each video for half of the video length, starting right after the query frame. The occluding circle is centered on the ground truth track. We vary the radius of the occluding circle and increase it from 0% to 100% of the video width. 
The reason why tracking accuracy is still 19.9% with a radius of 100% is that the model sees the second half of the video and can track points there. If we occlude all the frames with a radius of 100%, the accuracy drops to 2%. \\n\\n| occlusion radius in % of image width | $\\delta_\\mathrm{avg}$\\u2191 |\\n| :---: | :---: |\\n| 0 | 76.8 |\\n| 4 | 48.8 |\\n| 8 | 42.2 |\\n| 12 | 39.8 |\\n| 20 | 36.4 |\\n| 40 | 30.2 |\\n| 80 | 23.3 |\\n| 100 | 19.9 |\\n\\nIn the second experiment, we fix the radius to 8% of the image width and vary the duration of the occlusion from 0 to 100% of the video length, with the average video length being 60 frames. Short occlusions of 20% of the video length (12 frames on average) affect performance, but not significantly: accuracy drops from 76.8% to 61.3%. When occluding points for the whole duration of the video (100%), the model can still somewhat predict where these points are thanks to supporting grid points.\\n\\n| occlusion duration in % of video length | $\\delta_\\mathrm{avg}$\\u2191 |\\n| :---: | :---: |\\n| 0 | 76.8 |\\n| 20 | 61.3 |\\n| 40 | 48.2 |\\n| 60 | 37.5 |\\n| 80 | 29.9 |\\n| 100 | 21.1 |\\n\\n> From Table 2, it appears that the training set does matter in the results: the methods trained with Kub+15M performed on average better than the methods trained with Kub, please explain and elaborate. What is the difference? \\n\\nKubric (Kub) is a synthetic dataset. Kub + 15M means synthetic pre-training with fine-tuning on 15 **million** real videos. Kub + 15k is synthetic pre-training and fine-tuning on 15 **thousand** real videos. In this table we show that fine-tuning improves the results over synthetic pre-training alone, and that RealTracker is better after training on 15 **thousand** videos compared to BootsTAPIR trained on 15 **million** videos. 
BootsTAPIR did not release their 15M training video dataset so it is not reproducible.\\n\\n> Why does the offline method perform better than the online method, Intuitively I would assume the opposite?\\n\\nThe offline method has access to all the video frames at once (more context than the online method that does sequential processing), so the offline method can deal better with occlusions. The offline method is able to reason across the whole video and thus can track points forward and backward in time, while the online version operates in a sliding-window manner and can track only forwards.\\n\\n> What are the limitations and failure cases?\", \"we_discuss_limitations_of_the_proposed_method_in_the_appendix_of_the_paper\": \"A key limitation of our pseudo-labeling pipeline is its reliance on the quality and diversity of teacher\\nmodels. The observed saturation in performance on TAP-Vid during scaling suggests that the student model absorbs knowledge from all the teachers and, after a certain point, struggles to improve further. Thus, we need stronger or more diverse teacher models to achieve additional gains for the student model.\\n\\nThank you, we've now included failure cases in the revised version of the paper. Please see the revised version for visuals.\", \"featureless_surfaces_is_a_common_mode_of_failure\": \"the model cannot track points sampled in the sky or on the surface of water. Other common sources of failure are tracking shadows of objects and tracking through long occlusions.\\n\\n> Table 6, why does SIFT turn on the best results?\\n\\nSift is just slightly better. The model is more or less indifferent to the choice of point sampling.\"}", "{\"comment\": \"> The methodology appears to be more engineering-oriented rather than theoretically innovative. The pseudo-label fine-tuning approach is relatively common. 
The technical contributions seem somewhat limited.\\n\\nRealTracker is the first paper to analyse scaling effects of training point trackers on pseudo labels (see Fig. 1). In this paper, we show that it is possible to improve the performance of any synthetic-trained point tracker on real data using only other **synthetic-trained** teachers. Moreover, these teacher trackers **don't have to be better** than the student tracker. For example, **synthetic-trained RealTracker is better than all its synthetic-trained teachers** and its performance still improves significantly after training with our pipeline. \\n\\n> The model's improvement of performance is heavily dependent on the teacher model's capabilities. This strong reliance on existing methods' performance creates a ceiling effect where the training results are constrained by the teacher model's performance limits, potentially reducing the method's generalizability.\\n\\nThe model improves with more teachers even if they are all trained on the same dataset (Kubric) and even if all of them are worse than the student model (Tab. 5). So, the student model is not bounded by their performance, it just saturates after absorbing knowledge from training with different teachers for a while. This allows to improve any existing point tracker, even if it is already better than all the teacher models (for example, RealTracker, Tab. 5). In Fig. 1, we show the universality of the proposed pipeline: it improves other point trackers, such as LocoTrack and CoTracker. Interestingly, using only the model itself as a teacher also improves the results (Tab. 4).\\n\\n> The paper lacks substantial technical innovation in terms of cross-domain adaptation techniques. 
The approach merely relies on real-data fine-tuning and teacher model voting effects for enhanced robustness, neither of which represents a significant contribution to the field of domain adaptation.\\n\\nThis is the first paper to systematically explore self-training on real data for point tracking. The state-of-the-art point tracker BootsTAPIR is trained on **15 million** real videos. Here we show that it is possible to outperform it by training only on **15 thousand** real videos with a simpler training protocol.\\nThe fact that we propose a simple method to do so is, in our view, a feature: we set a strong baseline for others to build on. In addition, we show for the first time how these trackers scale with the amount of training data, which is in its own right an important empirical analysis that will guide future research. We also obtain a result which is practically useful and important. Just like many are building on trackers like CoTracker2, we expect that many will take advantage of our new tracker. Besides, we also provide a substantially better tracker architecture, which is not only state-of-the-art, but also faster and simpler than those it supersedes. We believe that this will also be of great interest to the community.\\n\\n> The terminology \\\"self-supervised fine-tuning\\\" is indeed questionable in this context.\\n\\nWe agree that this method is more aligned with pseudo-labelling approaches; we have replaced all mentions of \\\"self-supervised training\\\" with \\\"pseudo-labelling\\\" in the revised version.\"}", "{\"comment\": \"We thank all reviewers for their thoughtful feedback. The reviewers find that RealTracker [R1]`is an interesting paper that focuses on an essential problem in the community` with a [R2]`well-justified motivation`. 
RealTracker's [R2] `visualization results are particularly impressive in demonstrating the model's capabilities`, it [R3,R4] `achieves better results on several public datasets compared to state-of-the-art trackers`. We address reviewers' comments below and have already incorporated their feedback in the uploaded revised version.\"}", "{\"summary\": \"1. The authors address the redundancy in modules of various existing point tracking models and propose RealTracker, a network with simplified architecture that achieves better performance and faster processing speed.\\n\\n2. The authors leverage existing models to generate pseudo-labels for real video data, enabling effective utilization of unlabeled videos for network fine-tuning, which further enhances performance.\\n\\n3. The authors analyze the impact of real data scale on the network model's performance, providing insights into the relationship between dataset size and tracking effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper's motivation is well-justified, particularly in its approach to eliminate model redundancies, resulting in a more lightweight yet powerful architecture.\\n2. The paper demonstrates effective utilization of unlabeled real-world datasets for training, achieving significant performance improvements through this approach.\\n3. The experimental analysis is comprehensive, and the visualization results are particularly impressive in demonstrating the model's capabilities.\", \"weaknesses\": \"1. The methodology appears to be more engineering-oriented rather than theoretically innovative, primarily consisting of combinations and modifications of existing methods. The pseudo-label fine-tuning approach is relatively common. Given this is a **deep learning conference**, the technical contributions seem somewhat limited.\\n2. 
As acknowledged in the limitations section, the model's improvement of performance is heavily dependent on the teacher model's capabilities. This strong reliance on existing methods' performance creates a ceiling effect where the training results are constrained by the teacher model's performance limits, potentially reducing the method's generalizability.\\n3. The authors aim to bridge the domain gap using real-world dataset training. However, the paper lacks substantial technical innovation in terms of cross-domain adaptation techniques. The approach merely relies on real-data fine-tuning and teacher model voting effects for enhanced robustness, neither of which represents a significant contribution to the field of domain adaptation. More sophisticated cross-domain strategies or novel technical approaches would have strengthened the paper's contribution in addressing the domain gap problem.\", \"questions\": \"1. The terminology \\\"self-supervised fine-tuning\\\" is indeed questionable in this context. Using state-of-the-art models from the same domain to generate pseudo-labels for supervision is more aligned with teacher-student learning or pseudo-labeling approaches rather than traditional self-supervised learning, where the supervision signals are typically derived from the data itself without external models.\\n\\n2. The incorporation of domain adaptation strategies during the fine-tuning process would have significantly enhanced the paper's contribution. This could have included techniques specifically designed to address domain shift and better align feature distributions between source and target domains.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The article has received feedback from four reviewers, to which the authors have provided partial responses. 
However, the lack of intense discussion among the reviewers suggests that the paper may have limited novelty and topicality. It is hoped that the authors will take the reviewers' comments into consideration for further refinement of the manuscript and wish them success in future submissions. The current version of the paper will be rejected.\", \"additional_comments_on_reviewer_discussion\": \"1. The task setting of the article is not particularly novel; as the reviewers have pointed out, a similar design was employed by TrackingNet.\\n2. The experimental validation section remains insufficient and calls for further optimization.\"}" ] }
0VP3LuzZ8K
Generalization of noisy SGD under isoperimetry
[ "Leello Tadesse Dadi", "Volkan Cevher" ]
We study the generalization of iterative noisy gradient schemes on smooth non-convex losses. Formally, we establish time-independent information-theoretic generalization bounds for Stochastic Gradient Langevin Dynamics (SGLD) that do not diverge as the iteration count increases. Our bounds are obtained through a stability argument: we analyze the distance between SGLD iterates on two datasets sampled from the same distribution. Our result only requires an isoperimetric inequality to hold, which is merely a restriction on the tails of the loss. We thus relax the assumptions of prior work to establish that the iterates stay within a bounded KL divergence from each other. Under an additional dissipativity assumption, we show that the stronger Renyi divergence also stays bounded by establishing a uniform log-Sobolev constant of the iterates. Without dissipativity, we sidestep the need for local log-Sobolev inequalities and instead exploit the regularizing properties of Gaussian convolution. These techniques allow us to show that strong convexity is not necessary for finite stability bounds and thus for finite generalization and differential privacy bounds.
[ "generalization", "langevin", "non-convex", "information theory" ]
Reject
https://openreview.net/pdf?id=0VP3LuzZ8K
https://openreview.net/forum?id=0VP3LuzZ8K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tOmKVrA47y", "sjhYwJu4fX", "likxHFzm9C", "jpjdzSJEsj", "Zn35Bye6Ur", "YTGmmyDfIC", "V6jM0E1Uia", "SkkWKx3hNj", "NpJ0qdDMzU", "E8HuJFjYPv", "7UlUMHKQm2", "64WxFOF1yf", "4oZZS4dbqu", "39EY9NxtYd", "11QgPRN2G4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "decision" ], "note_created": [ 1732584791374, 1732803027757, 1732080437899, 1732080490592, 1730618484875, 1732079721271, 1730715667830, 1732080653443, 1732548742456, 1730474615825, 1734716728031, 1732803425493, 1732079992421, 1730219773966, 1737524003810 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_DPmq" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_aiX5" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_J6MA" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_aiX5" ], [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_4Kqp" ], [ "ICLR.cc/2025/Conference/Submission9755/Area_Chair_HERJ" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Authors" ], [ "ICLR.cc/2025/Conference/Submission9755/Reviewer_DPmq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"Thank you to the authors for their response and for addressing some of my concerns.\\n\\nHowever, I still have reservations about certain aspects of the results:\\n\\n1. The proof in Section 4 largely follows from previous work, offering limited novelty.\\n2. 
In Section 5, the LSI constant scales exponentially with the dimension, which raises concerns about its practicality.\\n3. The results in Section 6 are intriguing, particularly because they rely only on the LSI of the stationary distribution while achieving a time-independent KL-divergence bound. However, I find it difficult to appreciate the contributions in Theorem 18 and Corollary 20.1 fully, as the statements are obscured by the use of poly() terms and the ergodicity error term.\"}", "{\"comment\": \"Dear Reviewer DPmq,\\n\\nWe are sincerely grateful that you acknowledged our response. We would like to respond to your remaining reservations in the following.\\n\\n1. We had highlighted in our introduction to Section 4 that we first establish `a simplified proof template` on which our contributions in sections 5 and 6 build. Section 4 is setting up the following sections by identifying Theorem 6 as the crucial entrance point for our new results.\\n2. The central goal of our work is to establish a bound that does not diverge as the iteration count increases. Unlike some existing generalization bounds, `our bound is finite and does not scale with iteration counts` $k$ which today exceed $2^{32}$ (see for example training durations in Table 6 of [diffusion models](https://arxiv.org/pdf/2312.02696)). Second, `it is this dimension dependence that we mention is the very reason that motivated our section 6` where we mitigate for this dependence. \\n3. We appreciate that you find our result interesting. We have rewritten the introduction of section 6 in our revision to clarify the proof template. Moreover, `the terms in Theorem 18 are explicitly available in the appendix` (see equations 8 and 9 on line 503). The poly term in the Corollary is merely a combination of line 1153 with Lemma 32. 
We restate them here for your convenience, the poly term in Corollary 20.1 is\\n\\n $$\\n \\\\left(2\\\\eta\\\\beta^2L^2 + 2(1+\\\\frac{\\\\eta}{2\\\\beta})\\\\right)\\\\left(c_\\\\pi D_\\\\mathrm{KL}(X_0, \\\\pi) + \\\\frac{\\\\eta}{\\\\beta}(8dL^2 + 2\\\\sigma^2)\\\\right) + 8\\\\left(1+\\\\frac{\\\\eta}{2\\\\beta}\\\\right)\\\\left(c_\\\\pi' D_\\\\mathrm{KL}(X_0', \\\\pi') + \\\\frac{\\\\eta}{\\\\beta}(8dL^2 + 2\\\\sigma^2)\\\\right) \\n $$\\n The key of our result is not the exact form of the polynomial dependence above but in the fact that it _is polynomial_. This is the reason we kept the expansions in the appendix. The takeaway message is the following: at the cost of some additional polynomial terms, we can rely on the LSI of the stationary distribution instead of the per-iterate one. This is the key message (see also [our response here](https://openreview.net/forum?id=0VP3LuzZ8K&noteId=jpjdzSJEsj) for further details).\\n\\nWe thank you for your time and we remain available for further discussions.\"}", "{\"comment\": \"Dear Reviewer aiX5,\\n\\nWe are grateful for your thorough, well organized review and your detailed feedback. In the following, we would like to address the points you raise.\\n\\n1. `bounds in the literature` To strengthen our Related Work Section 3, we will add a table listing previous bounds considering iterative noisy gradient schemes in non-convex settings. Our focus in our paper is on bounds that apply specifically to noisy iterative algorithms and not on bounds that are agnostic to the learning algorithm, as is the case for most generalization bounds which only consider the function class.\\n\\n2. `lower bounds` Our motivation is to amend a gap in previous bounds which explode as the iteration count increases. Our dissipative bound, unlike previous results, matches the lower bound of (Chourasia et al 2021, Theorem 3), for $q=1$, as strongly convex functions are special cases of dissipative functions. 
Our bounds in section 6, however, are unlikely to be tight but they are a significant improvement over previous diverging bounds. \\n\\n 3. `experimental evaluation` We believe we are in a reverse situation where practice has already shown that large training times can perform well (see for instance the $2^{32}$ training iterations of state of the art [diffusion models](https://arxiv.org/pdf/2312.02696)) and generalization bounds that aligned with these observations were absent as most existing bounds diverged with training time. Our work is amending theory to make it match what is already practically observed. Our motivation is to show, theoretically, that there exist non-convex settings where long training runs are not harmful.\", \"technical_comments\": \"1. `O(1/sqrt(n)) vs O(1/n) bounds` For information-theoretic bounds the fast rate is indeed achievable but it comes with a trade-off: it becomes analytically difficult to obtain a decay factor to control the number of iterations. With current techniques, to obtain the fast rate one has to accept a diverging bound, see Proposition 15 and Corollary 16 of [D]. Our goal is to characterize algorithms that are run for a large number of iterations. For this reason, we cannot use $1/n$ generalization bounds that only depend on the function space nor can we use information-theoretic bounds which are not tractable when applied to noisy iterative algorithms.\\n\\n2. `Conversion of R\\u00e9nyi DP to DP` We are very grateful for this reference, we were unaware of this refinement. We will include the improved conversion in Lemma 3.\\n\\n3. `the dependence on the order of R\\u00e9nyi divergence` For the privacy result, indeed, your observation that the decay factor in (Chourasia et al 2021) does not appear to degrade with $q$ is correct. The reason is not the two-step analysis but rather the tighter \\\"Renyi log-Sobolev inequality\\\" they use. 
Their Lemma 3 has a $\\\\partial R_\\\\alpha/\\\\partial\\\\alpha$ appearing which is the derivative of the Renyi divergence with respect to the order. As Renyi divergence is increasing, this term is positive and can be ignored. By disregarding it, we obtain our result. When including it in the analysis, Chourasia et al obtain the equivalent of our Theorem 6 *but with a changing Renyi order per iteration* (see their equation 66-67). We chose not to operate with changing Renyi orders as it quickly becomes notationally heavy but their tightening can be applied to our result with a similar PDE argument (their lines 59-61).\\n\\n\\n\\n4. `dimension dependence` Going from the strongly convex setting section 5.1 to the non-convex setting incurs a dimension dependence. This worst-case dimension dependence in noisy iterative schemes is expected in non-convex settings (see discussion around 4.3 in [B]). However, we *mitigate for this poor dimension dependence in Section 6*. \\n\\n5. `a lower bound on how the KL or R\\u00e9nyi divergence changes with iterations seems doable` Lower bounds in non-convex settings are notoriously difficult to construct. In strongly convex settings, the Gaussian serves to establish lower bounds, in the non-convex setting, the KL divergence between even the simplest non-convex distribution, a mixture of Gaussians, is not analytically tractable. In the space of noisy iterative algorithms, lower bounds are a major open problem (see page 3 of [C]).\"}", "{\"comment\": \"`Ideas in section 6`\\n\\nIn section 5, we use a per-iterate-LSI to obtain a decaying factor. When all we have is the dissipative assumption, the per-iterate-LSI is set by the worst-case dissipative function. This is where the worst-case exponential dependence enters. 
This is the cost we must pay to have a bound where each constant involved relates to stability and it is the cost of matching the strongly convex bound of (Chourasia et al 2021).\\n\\nIn section 6, we accept other constants in the bound, namely the convergence speed of the algorithm. With this allowance, we can use the LSI of the target instead of a per-iterate LSI. Here the constants are no longer set by the worst dissipative function, but by the specific $F_n$ we are optimizing. For this benefit, we must introduce terms polynomial in dimension in our bounds, and we also need the algorithms to converge in Wasserstein 2, which sets the stepsize in Corollary 20.1.\\n\\n`Techniques in Section 6`:\\n\\nThe main technical tool is Theorem 18. There we show an approximate contraction that can replace Theorem 6, which requires a per-iterate-LSI. To obtain this approximate contraction, explained roughly, we must change the second argument in the divergence (the b in KL(a||b)) from the distribution of the iterates to the distribution of the target. This change of measure is not generally possible.\\n\\nThe simplification of the expansion-contraction plays a significant role and will allow us to perform the change of measure. We need the half-noises to play two roles. The first noise plays a smoothing role. It guarantees that the Hessian is lower bounded. This smoothness property allows us to do a change of measure in Lemma 18, and instead of the iterate LSI, we can use the LSI of the target. \\n\\nTo summarize, section 5 shows\\n* A per-iterate-LSI. 
This is a valuable contribution as it solves a curious quirk: it was unknown, for example, whether discrete Langevin iterates initialized at a Gaussian targeting a mixture of two Gaussians had a bounded LSI constant when both the initialization and the target had a finite one.\\n* a time-independent bound which matches the form of the strongly convex bound.\\n\\nThen, because of the dimension dependence of the bound, we propose section 6 where\\n* A dependence on the target LSI instead of a per-iterate-LSI is achieved at the cost of introducing constants that do not appear in the strongly convex case.\\n* A change of measure argument is shown thanks to the half-step technique. Fundamentally it relies on introducing well-chosen couplings on the RHS of Lemma 18 to make Wasserstein distances appear.\\n\\n\\nWe are grateful for the time you took evaluating our work; we remain at your disposal for any further clarifications we could provide. If we have addressed your concerns, we kindly ask you to consider raising your score.\\n\\nReferences\\n---\\n[B] [Raginsky, Rakhlin, Telgarsky. \\\"Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis.\\\" Conference on Learning Theory. PMLR, 2017](https://arxiv.org/abs/1702.03849).\\n\\n[C] [Chewi, Sinho, et al. \\\"Fisher information lower bounds for sampling.\\\" arXiv preprint arXiv:2210.02482 (2022).](https://arxiv.org/abs/2210.02482)\\n\\n[D] [Wang, Hao, Rui Gao, and Flavio P. Calmon. \\\"Generalization bounds for noisy iterative algorithms using properties of additive noise channels.\\\" Journal of machine learning research 24.26 (2023): 1-43.](https://www.jmlr.org/papers/v24/21-1396.html)\"}", "{\"summary\": \"The paper explores KL and R\\u00e9nyi divergence stability of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm. 
The main characteristic of the presented stability bounds is that they do not become vacuous with the number of iterations of SGLD, which is achieved by assuming that a log-Sobolev-type isoperimetric inequality is satisfied, either throughout the stochastic process, or just by the steady-state Gibbs distribution that SGLD asymptotically approximates. Such isoperimetric properties have also been recently shown to provide rapid convergence in informational divergence as well as convergent DP properties. In a similar vein, the paper derives non-asymptotic and convergent generalization bounds for SGLD as well as bounds on R\\u00e9nyi DP under isoperimetric assumptions. Moreover, the paper shows that the isoperimetric assumption is satisfied under settings considerably milder than strongly-convex losses, such as under dissipative and smooth losses.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and explains its ideas in a well-paced manner, introducing a bit of background, assumptions, and supporting theorems at convenient locations for the reader to follow.\", \"The paper presents an example where non-convex loss can provide generalization guarantees that are non-vacuous in the number of iterations.\", \"The paper simplifies the expansive-contractive decomposition of SGLD steps used in related works for bounding information divergence.\"], \"weaknesses\": [\"Non-analytical issues:\", \"There is no comparison of the presented bounds with existing results in the literature. The generalization bounds presented should be compared to both information-theoretic and non-information-theoretic bounds under similar sets of assumptions.\", \"Contrasting with lower bounds on stability is needed to assess gaps in the tightness of the analysis presented. 
If such an analysis proves difficult, a well-designed experimental evaluation to compare the generalization bounds with the actual generalization behaviour under the stated assumptions should have been included.\"], \"technical_comments\": \"- Firstly, information-theoretic generalization bounds inspired by Xu and Raginsky seem to have an O(1/sqrt(n)) dependence on the dataset size, even in cases where other generalization approaches give a better O(1/n) bounds [1]. Since the bounds presented in this paper show dependence in the dataset size n only through Lemma 2 (by Xu and Raginsky), I believe the paper's generalization guarantees might have suboptimal dependence on n under the assumptions made.\\n\\n- Lemma 3 for conversion of R\\u00e9nyi DP to $(\\\\epsilon, \\\\delta)$-DP isn't the best known bound. [Theorem 21, 2] gives a strict improvement which is the best known conversion in my knowledge.\\n\\n- While Theorem 7 neatly presents the change in R\\u00e9nyi divergence under LSI after a single SGLD step, I believe this inequality might be loose, specially in the dependence on the order $q$ of R\\u00e9nyi divergence. That's because the paper slightly modifies the expansion-contraction template used in other prior works for simplicity. In [3] the expansion-contraction step seems to occur simultaneously, which yield a PDE that is better able to quantify the change in R\\u00e9nyi divergence when integrated over a single step.\\n\\n- In Section 5.1, the constant of LSI under convexity is dimension independent. But on relaxing strong convexity to dissipativity, the LSI constant has an exponential dependence O(e^d) on the dimension size. The paper further claims in line 418 that this dependence on dimension can't be improved without additional assumptions. 
To me, this seems like a major hurdle that greatly limits the applicability of the generalization bounds presented (both Corollary 14.1 and 15.1) as plugging in the $C_{LSI}$ constant of Theorem 12 gives an $KL(X_t\\\\Vert X'_t) = O(e^d)$ dependence on dimension $d$. \\n\\n\\n[1] Haghifam, Mahdi, et al. \\\"Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization.\\\" International Conference on Algorithmic Learning Theory. PMLR, 2023.\\n\\n[2] Balle, Borja, et al. \\\"Hypothesis testing interpretations and renyi differential privacy.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\\n\\n[3] Chourasia, Rishav, Jiayuan Ye, and Reza Shokri. \\\"Differential privacy dynamics of langevin diffusion and noisy gradient descent.\\\" Advances in Neural Information Processing Systems 34 (2021): 14771-14781.\\n\\nCrafting examples of loss functions satisfying the assumptions made and computing a lower bound on how the KL or R\\u00e9nyi divergence changes with iterations seems doable.\", \"questions\": \"I'm not sure if I understood the results in Section 6, which seems to be adopting an entirely different style of analysis as compared to section 5, which helps in lifting the LSI assumption on the entire sequence of intermediate distributions to LSI on the Gibbs distribution corresponding to the loss function. It would help if this approach is explained more thoroughly to see the idea in there a bit more clearly.\\n\\nI'm open to increasing my score, especially if Section 6 has some good ideas that I might have missed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 4Kqp,\\n\\nWe are grateful for your thorough review and feedback. We are encouraged by the strengths you see in our work. 
In the following, we would like to answer some of your questions.\\n\\n(W1) `Assumption 15` For our privacy result, we need a bounded sensitivity assumption because the Renyi divergence introduces expectations that are rather difficult to control. Assumption 15 is standard in the privacy literature (see Def 2.10 in [A]) and it holds for the logistic loss over bounded data or any regularized Lipschitz loss.\", \"questions\": \"(1) `In the second paragraph under contributions.` Thank you for spotting this typo.\\n\\n(2) `In Lemma 2, is the constant the same one from Assumption 1?` Yes, the constant is the same and we have fixed the theorem statement.\\n\\n(3) `Some intuitive explanations about the half-step technique`: The expansion is done by the gradient step and the contraction is done by the noise. Conditioned on the current step, the KL divergence measuring the gradient step alone would be infinite as it would correspond to the KL divergence between two Dirac distributions. However, if we split the noise in two and add some Gaussian noise to the gradient step, the KL divergence becomes finite and easy to analyze. This is the main reason why we split the noise. A secondary reason is the application of Lemma 17: we need a small amount of Gaussian noise to smooth the distribution before analyzing the contraction.\\n\\n(4) `Assumption 14`. Indeed, you are correct: we borrow the assumption from prior work [B, Assumption 3.1] where there is a $\\\\|z - z'\\\\|$ appearing on the RHS. We then absorbed $\\\\|z - z'\\\\|$ into $\\\\theta$ and $D$ by assuming a bounded data distribution. We will add clarifications.\\n\\nWe thank you for the time you took to review our paper and remain at your disposal for any clarifications you may need. \\n\\nReferences\\n---\\n[A] [Bok, Jinho, Weijie Su, and Jason M. Altschuler. 
\\\"Shifted Interpolation for Differential Privacy.\\\" arXiv preprint arXiv:2403.00278 (2024).](https://arxiv.org/abs/2403.00278)\\n\\n[B] [Zhu, Lingjiong, et al. \\\"Uniform-in-time Wasserstein stability bounds for (noisy) stochastic gradient descent.\\\" Advances in Neural Information Processing Systems 36 (2024).](https://proceedings.neurips.cc/paper_files/paper/2023/hash/05d6b5b6901fb57d2c287e1d3ce6d63c-Abstract-Conference.html)\"}", "{\"summary\": \"This paper studies the stability of SGLD, which implies generalization and differential privacy guarantees of SGLD. Instead of assuming strong convexity of the loss function, the authors demonstrate that stability results still hold under the dissipativity assumption. Technically, their result is established via verify the uniform LSI of SGLD outputs. Beyond the dissipativity assumption, they also establish a stability result via utilizing the regularizing properties of Gaussian convolution.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Clear statement of setup and theoretical results.\\n\\n2. Detailed proof with several illustrations of proof steps via pictures.\\n\\n3. Previous results are clearly mentioned with detailed references.\\n\\n4. The results in this paper extend previous findings under convexity to weaker conditions, which is an important improvement.\", \"weaknesses\": \"The writing in some parts is confusing, making it difficult to clearly understand the contribution in Section 6:\\n\\n1. In line 199, the authors state, \\\"we assume in the following that the Gibbs distribution with density proportional to exp(-Fn) satisfies the LS,\\\" but in Assumption 19, the authors seem to state this LSI assumption again. Is there any difference?\\n\\n2. In Section 6.1, the authors seem to claim two important preliminary results in Lemma 16 and 17 but don't explain how they affect establishing the main result.\\n\\n3. 
It seems that the results in Section 6 are established without verifying the uniform LSI. If so, I am wondering if the analysis template in Section 4 is only applied in Section 5 and whether it should be merged with Section 5. Moreover, what is the main proof framework for establishing results in Section 6?\\n\\n\\nOther minor writing problems\\n\\n1. In line 90, should it be \\\"the bound does not decay to zero\\\"?\\n\\n2. In lines 439, 452, 874, \\\"given in Theorem 12.\\\"\\n\\n3. In line 504, \\\"given in equation 8 and equation 9.\\\"\", \"questions\": \"My main questions are about Section 6, as stated in the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
In Section 6, we notice that instead of the clean contraction in Theorem 6, we can have \\n$$\\nD(current) \\\\leq \\\\gamma D(previous) + \\\\textrm{additive terms}\\n$$\\nwhere the additive terms are bounded. This is what we call approximate contraction: Theorem 18 has a similar contraction as in Theorem 6 but with added additive terms. By substituting Theorem 6 with Theorem 18, we obtain our KL generalization result. \\n\\n`what is the main proof framework`. The main technical contribution in section 6 lies in showing Theorem 18. Instead of taking a per-iterate LSI like in theorem 6, Theorem 18 only requires LSI of the target measure. To obtain this result, one needs to change, at some point in the analysis, the distribution with respect to which the KL divergence is being taken: from the distribution of the iterates to the distribution of the target. To perform this change of distribution, we use Lemma 18. Observe there that an expectation under $Y$ on the left can be switched to an expectation under $X$ on the right. Lemma 18, however, only applies to sufficiently smooth functions. This is precisely where Lemma 17 comes in. Lemma 17 shows that Gaussian convolution has a smoothing behavior which enables us to apply Lemma 18. Combining these results with a continuous-time analysis of Gaussian convolution yields Theorem 18.\\n\\n\\n\\n`Writing lines 90, 439, 504`: Thank you for spotting these mistakes.\\n\\nWe are grateful for your time and remain available for any further clarifications we could provide.\"}", "{\"comment\": \"Most of the authors' comments are well received. I have raised my score accordingly. 
Here are the concerns that weren't sufficiently addressed.\\n\\n> \\\"For this reason, we cannot use generalization bounds that only depend on the function space nor can we use information-theoretic bounds which are not tractable when applied to noisy iterative algorithms.\\\"\\n\\nThe authors' goal for the paper is to \\\"characterize algorithms that are run for a large number of iterations\\\" and the generalization bound presented has the desired $O(1)$ dependence on the number of iterations. But since the dependence on the dataset size $n$ suffers (perhaps due to a fundamental limitation of the analytical tools used, perhaps not), I believe it is important to acknowledge this gap in the paper.\\n\\n> \\\"Our dissipative bound, unlike previous results, matches the lower bound of (Chourasia et al 2021, Theorem 3), for \\n, as strongly convex functions are special cases of dissipative functions.\\\"\\n\\nIt seems to me like this is an incorrect assertion. The LSI constant in Theorem 12 hides a dependence on dimension d that does not appear in the lower bound of (Chourasia et al 2021, Theorem 3). The paper should acknowledge that the bounds presented under dissipativity might not be tight (at least in dimension d and in dataset size n).\"}", "{\"summary\": \"This paper studies generalization of stochastic gradient Langevin dynamics (SGLD) via an information-theoretic bound. The author(s) obtained Renyi stability by assuming the iterates verify the log-Sobolev inequality (LSI). The author(s) further showed that the LSI indeed is satisfied under some dissipativity condition. Further results are obtained when dissipativity is not available, in which case KL stability can still be achieved. The bounds are uniform-in-time, which makes them strong. 
A by-product is that the paper shows that under dissipativity, all the iterates verify a uniform LSI, which was previously shown only in the strongly-convex setting, that resolves an open question in the literature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The paper is well written, and it is a very solid theoretical paper.\\n\\n(2) The bounds are uniform-in-time, obtained under Renyi divergence under dissipativity condition and KL stability without dissipativity.\\n\\n(3) A key ingredient in the proof is to show that under dissipativity, all the iterates verify a uniform LSI, which was previously shown only in the strongly-convex setting. This by-product resolves an open question in the literature.\", \"weaknesses\": \"(1) Assumption 15 seems to be a really strong assumption. It would be nice if the author(s) can comment on whether this assumption is needed because of the proof technique or it might be unavoidable.\\n\\n(2) As the author(s) mentioned in the conclusion section, the dimension dependence is strong. But since the author(s) are working with non-convex setting, this is understandable.\", \"questions\": \"(1) In the second paragraph under contributions, \\\"all the iterates of verify a uniform log-Sobolev inequality''. Should it be \\\"all the iterates verify a uniform log-Sobolev inequality''?\\n\\n(2) In Lemma 2, is the constant $c$ the same one from Assumption 1? If so, you should mention in the statement of Lemma 2 that you are assuming Assumption 1. For a related question, for each lemma and theorem, it would be really nice if the author(s) can make it more transparent which assumptions are used, especially because the paper contains quite many theoretical results in different settings which require different assumptions.\\n\\n(3) It would be nice if the author(s) can add some intuitive explanations about the half-step technique in the analysis. 
For example, when you split the Gaussian noise $N_{k+1}$ into $N_{k+1}^{(1)}$ and $N_{k+1}^{(2)}$, why the former becomes expansive, whereas the latter becomes contractive.\\n\\n(4) Assumption 14 seems to be a bit strange. If it is pseudo-Lipschitz, shouldn't it be small\\nwhen $z$ and $z'$ are close to each other? But I do not see $z$ and $z'$ appearing on the right hand side.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors studied the generalization properties of SGLD under isoperimetry. From what I understand, the results essentially generalize the results of Raginsky et al. (2017) to a KL-based stability bound.\\n\\nAs many of the reviewers would agree, this line of work has been around for a while, and there are not a lot of significant contributions towards improving the critical issues here. In particular, any isoperimetric approach in a non-convex setting suffers from the curse of dimensionality, i.e. the constants depend exponentially on dimension. This is a critical issue as we will never be able to use these bounds in practice. \\n\\nLet's put aside this fundamental issue for a bit: most of the ingredients in this paper were not new to the reviewers or myself, nor were the results significant improvements. The claim of resolving the open question of Vempala and Wibisono (2019) is also overstated, as you require a stronger dissipativity assumption, and the uniform LSI constant is exponentially dependent on dimension once again. \\n\\nGiven that many reviewers believe the contributions are lacking, and given the above discussion, I will recommend reject.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer aiX5's discussion was the most productive, as it highlighted many of the issues I discussed above that remained unresolved. 
This line of messages was the most informative towards my decisions.\"}", "{\"comment\": \"Dear Reviewer DPmq,\\n\\nWe sincerely thank the reviewer for their time and their review of our paper. \\n\\nWe would like to provide a clarification on the reviewer's summary:\\n\\n_The authors first show that [LSI] holds, an upper bound on the KL divergence can be derived. They further demonstrate that under [dissipativity] the distribution satisfies the LSI, thus leading to the desired bound._\\n\\nOur goal is to show a time-independent generalization bound in non-convex settings. In section 5, we assume the function is dissipative; this allows us to show a per-iterate LSI constant. 
With this result we are able to prove that noisy SGD applied on an unbounded non-convex loss generalizes (KL-stability) and that it is also differentially private (Renyi stability).\\nIn section 6, we do not assume dissipativity. We only assume that the target function satisfies the LSI. We derive a technique to establish KL stability without using per-iterate LSI, and only using the final target LSI. This enables us to avoid the poor dimension dependence at the cost of introducing some additional constants in the bound.\", \"weaknesses\": \"1. `Building on prior results`: We would like to stress that theorems 6 and 8 are foundational results. The citation for Theorem 8, for instance, refers to Chafai's 2004 book. Theorem 12, which uses the Chen et al. 2021 result, is only applicable _because_ of our preceding analysis. We are building on fundamental results to obtain theorems that are relevant, and our results are not immediate derivations.\\n2. `Assumption 15` For our privacy result, we need a bounded sensitivity assumption because the Renyi divergence introduces expectations that are rather difficult to control. Assumption 15 is standard in the privacy literature (see Def 2.10 in [C]) and it holds for the logistic loss over bounded data or any regularized Lipschitz loss.\\n3. `Dimension dependence`: As we are in a non-convex setting, dimension dependence is unavoidable. We note this at the end of section 5 and propose section 6 to improve this dimension dependence. \\n\\nWe answer your questions in the following.\\n\\n1. `Isoperimetric vs. Log-Sobolev Inequality`: We followed the terminology introduced in Vempala and Wibisono [B] where functional inequalities like the LSI and the Poincare inequality are referred to as isoperimetric inequalities. This follows from the close equivalences that exist between isoperimetric inequalities and functional inequalities [A]. We will add clarifications.\\n\\n3. 
`What is \\\\Tilde{X}\\\\_k'` Thank you for spotting the typo; there is no tilde. It should simply be $X_k'$ . It is a modified expectation under $X_k'$ (see Definition 21). This is precisely where the difference between KL and Renyi divergences is most seen. For KL, the modified expectation is simple, for Renyi the modified expectation is more complex requiring stronger assumptions like Assumption 15.\\n4. `S_k`: You raise a good point. This quantity measures the gradient difference on two datasets sampled from the same distribution. It is referred to as the 'sensitivity' since it measures how sensitive the model is to changes in the dataset. As the batch size increases, the gradient estimates approach the population gradient thus making $S_k$ smaller. There is an inverse relationship between the batch size and $S_k$.\\n\\nWe thank you for the time you took to review our work. We remain at your disposal for any further clarifications you may need. If we addressed your concerns we kindly ask that you consider raising your score.\\n\\n---\\n[A] [Rothaus. \\\"Analytic inequalities, isoperimetric inequalities and logarithmic Sobolev inequalities.\\\" *Journal of functional analysis*(1985) ](https://www.sciencedirect.com/science/article/pii/0022123685900795)\\n\\n[B] [Vempala, Wibisono. \\\"Rapid convergence of the unadjusted langevin algorithm: Isoperimetry suffices.\\\" Neurips (2019).](https://proceedings.neurips.cc/paper/2019/hash/65a99bb7a3115fdede20da98b08a370f-Abstract.html)\\n\\n[C] [Bok, Jinho, Weijie Su, and Jason M. Altschuler. \\\"Shifted Interpolation for Differential Privacy.\\\" arXiv preprint arXiv:2403.00278 (2024).](https://arxiv.org/abs/2403.00278)\"}", "{\"summary\": \"This paper addresses stochastic optimization, where the goal is to minimize $F(x):=E_{Z\\\\sim \\\\nu}f(x;Z)$ for some underlying distribution $\\\\nu$.\\nLet $D$ and $D'$ be two datasets, each consisting of $n$ i.i.d. samples from $\\\\nu$. 
Running noisy stochastic gradient descent (SGD) on these two datasets yields sequences $\\\\{X_k\\\\}$ and $\\\\{X_k'\\\\}$ respectively. It is known that the generalization error scales with the KL divergence between the distributions of $X_k$ and $X_k'$ .\\n\\nThis paper provides a time-independent upper bound on the KL divergence, even as $k\\\\to \\\\infty$. The authors first show that when the log-Sobolev inequality (LSI) holds, an upper bound on the KL divergence can be derived. They further demonstrate that under appropriate conditions, such as dissipativity, the distribution satisfies the LSI, thus leading to the desired bound.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem studied is interesting and has practical relevance, with clear motivation provided.\\nThe paper is well-structured, with different cases and scenarios analyzed in depth.\\nThe results are extensively studied across various settings.\", \"weaknesses\": \"1. The main proofs rely heavily on existing work, like Theorems 6, 8, and 12.\\n2. Some assumptions require further discussion. For instance, Assumption 15 seems restrictive in unbounded domains like $R^d$.\\n3. In Theorem 12, the LSI constant scales exponentially with the dimension, which could be problematic for high-dimensional settings.\", \"questions\": \"1. Isoperimetric vs. Log-Sobolev Inequality:\\nThe paper mentions the use of the isoperimetric inequality, but the arguments seem entirely based on the log-Sobolev inequality (LSI). In probability theory, the isoperimetric inequality is usually considered a separate concept. Could this be a typo or an imprecise reference?\\n\\n2. What is \\\\Tilde{X}_k' in Theorem 5? Is it a typo, and should it be S_k instead?\\n\\n3. The $S_k$ is not carefully discussed. 
In SGLD, when drawing a batch of size $b$, could using a smaller batch size lead to a tighter bound on the KL divergence?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
0UvlnHgaii
Toward Exploratory Inverse Constraint Inference with Generative Diffusion Verifiers
[ "Runyi Zhao", "Sheng Xu", "Bo Yue", "Guiliang Liu" ]
An important prerequisite for safe control is aligning the policy with the underlying constraints in the environment. In many real-world applications, due to the difficulty of manually specifying these constraints, existing works have proposed recovering constraints from expert demonstrations by solving the Inverse Constraint Learning (ICL) problem. However, ICL is inherently ill-posed, as multiple constraints can equivalently explain the experts' preferences, making the optimal solutions not uniquely identifiable. In this work, instead of focusing solely on a single constraint, we propose the novel approach of Exploratory ICL (ExICL). The goal of ExICL is to recover a diverse set of feasible constraints, thereby providing practitioners the flexibility to select the most appropriate constraint based on the practical needs of deployment. To achieve this goal, we design a generative diffusion verifier that guides the trajectory generation process using the probabilistic representation of an optimal constrained policy. By comparing these decisions with those made by expert agents, we can efficiently verify a candidate constraint. Driven by the verification feedback, ExICL implements an exploratory constraint update mechanism that strategically facilitates diversity within the collection of feasible constraints. Our empirical results demonstrate that ExICL can seamlessly and reliably generalize across different tasks and environments. The code is available at https://github.com/ZhaoRunyi/ExICL.
[ "Inverse Reinforcement Learning", "Generative Diffusion Model" ]
Accept (Poster)
https://openreview.net/pdf?id=0UvlnHgaii
https://openreview.net/forum?id=0UvlnHgaii
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yqXrBhEMJy", "yWirOprRl8", "qrVVQY8Blb", "pBo3hVhBOv", "kS2K4H1i0s", "hkcIEFttfY", "hSKEHKWqRB", "as6Lt6GfLR", "WJXfv755YI", "MGRdkUH8et", "Kx2pnpSBsc", "KjdF1E8va6", "DTbWvBCQQl", "D24CZDglm4", "CiRaWGNEdr", "Au2TWUJ81l", "9sw2mIE77l", "52eLNUV6mh", "3pJJgixSJH", "3IDKOhQAY5" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732263854439, 1732266110868, 1732497317014, 1734400571180, 1732775246855, 1732266661385, 1737523746620, 1732726782075, 1730668931729, 1730701035911, 1732261101156, 1730328123971, 1732591952169, 1732264231285, 1732262358521, 1732497679167, 1732432721246, 1732529376275, 1732529266353, 1732266510125 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_X5Jn" ], [ "ICLR.cc/2025/Conference/Submission6139/Area_Chair_oPsK" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_56R9" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_zyAA" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_X5Jn" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_56R9" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_zyAA" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Reviewer_zyAA" ], [ 
"ICLR.cc/2025/Conference/Submission6139/Area_Chair_oPsK" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ], [ "ICLR.cc/2025/Conference/Submission6139/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author response to Reviewer zyAA - Part 1\", \"comment\": \"Dear Reviewer, we greatly appreciate your constructive comments. We have seriously considered your suggestions, and hopefully, the following response can address your concerns:\\n\\n&nbsp;*1. While the authors list computational concerns as one of the advantages Ex-ICL has over ICL, they do not conclusively show Ex-ICL's computational advantage. Figure 6 shows that Ex-ICL is more sample efficient in constraint inference, but a true test of computational efficiency should also take into account diffusion model training time.*\\n\\n**Response.** We thank you for this important question. In this work, we prioritize data efficiency over computational resource consumption, as it is a major concern of Offline Reinforcement Learning, which has been demonstrated in [2]. This prioritization is because dataset collection, rather than model training, is the more time- and resource-costly part of the offline learning pipeline. As shown in Figure 6, our method outperforms the previous Offline Inverse Constraint Reinforcement Learning methods in terms of this crucial consideration.\\n\\nAs for diffusion model training, its data consumption has been taken into account. As supporting evidence, we refer you to the initial part of the Ex-ICL plot in the revised Figure 6 (within the orange ellipse). The consistent zero performance in this circled region indicates the training phase of the diffusion model and reward model.\\n\\n&nbsp;*2. 'The experiments on maze and Mujoco are comprehensive but are fairly simple. 
For example, the baseline paper [1] includes a more realistic experiment on traffic scenarios.'*\\n\\n**Response.** We appreciate you bringing this to our attention. To address this concern, we have conducted an additional experiment under the CommonRoad-RL environment with a velocity $<40$ constraint, following the experimental setup described in [1]. Using the same HighD expert dataset and suboptimal dataset for training as [1] ensures a fair comparison.\\n\\nAs shown in Section B.3 of the Appendix in the revised manuscript, our method demonstrated superior performance in terms of cost reduction while achieving comparable reward and Area Under Curve (AUC) performance to [1]. This result shows the potential of our method for realistic autonomous driving tasks. \\n\\n&nbsp;*3. 'There's not enough detail in the main paper or appendix on methodology (how is $\\\\phi$ parameterized?)'*\\n\\n**Response.** Thanks for raising this important concern. We design a cost value model to model the feasibility $\\\\phi$ at the trajectory level, mirroring the reward value model architecture in the official implementation of [2], which employs a U-Net architecture. Given a horizon $H$-length noisy trajectory $\\\\tau^i$ as input, the model predicts the feasibility $\\\\phi_{\\\\omega}(s_t, a_t)\\\\in[0,1]$ for each state-action pair $(s_t, a_t)$ within the trajectory, generating a feasibility vector of length $H$. Based on these $\\\\phi_{\\\\omega}(s_t, a_t)$s, we compute the cost value $V_c = \\\\sum_{t=0}^{H} \\\\gamma^t c_{\\\\omega}(s^i_t, a^i_t, i) = \\\\sum_{t=0}^{H} \\\\gamma^t (-\\\\log \\\\phi_{\\\\omega}(s^i_t, a^i_t, i))$ for guiding the diffusion verifier. Note that the diffusion timestep, $i$, is provided as an explicit input to the network and embedded via an MLP to capture the timestep of the denoising process. 
Details of the cost value model architecture and hyperparameters are now included in Section A.4 and Table 3 of the Appendix in the revised manuscript.\\n\\n&nbsp;*4. 'How are you selecting the constraint out of the constraint pool discovered by Ex-ICL for the experiment section?'*\\n\\n**Response.** Thank you for raising this important point.\", \"our_experiments_can_be_split_into_3_parts\": \"\\\"5.1 Control Performance\\\", \\\"5.2 Exploratory Performance\\\", and \\\"5.3 Learning Efficiency\\\".\\n\\n1) For control performance (Section 5.1), we selected the constraints discovered under the largest $\\\\delta$. Since $\\\\delta$ controls the level of regularization on sparsity, our setting aligns with the previous setting of ICRL solvers that favor the sparsity of constraints, such as Section 5.1 of [1], thereby providing a fair comparison with previous works.\\n\\n2) For exploratory performance (Section 5.2), all the constraints stored in the constraint pool are included in the evaluation, with which we illustrated the diversity of learned constraints in Figures 4, 7, 8, and 9.\\n\\n3) In terms of learning efficiency (Section 5.3), we illustrate the sample complexity in discovering the first valid constraint with Ex-ICL. This is to offer a fair comparison with previous work that studies only one constraint.\\n\\nThese clarifications have been incorporated into the revised manuscript.\"}", "{\"title\": \"Author response to Reviewer 56R9 - Part 1\", \"comment\": \"&nbsp;*1. how is reward treated? Is a separate reward model that is (1) differentiable, and (2) conditioned on diffusion time (i in the author's notation) trained following Janner et al? These details are not present in Alg. 1, but are necessary to evaluate the gradient $p_\\\\mathcal{M}^c$ in eqns (9) and (10).*\\n\\n**Response.** Thank you for pointing out this important detail. Yes, our reward model follows the design of [1] Janner et al., employing a U-Net based neural network architecture. 
This network takes as input a horizon $H$-length noisy trajectory and the diffusion timestep, $i$, to predict the corresponding reward value. Its training is supervised, using noisy trajectories and their associated rewards. As a differentiable neural network, it provides the necessary gradients for classifier-free guidance of the generative diffusion verifier. This design choice and its details are now clearly explained in Section A.4 of the Appendix in the revised manuscript.\\n\\n&nbsp;*2. 'It is also not made clear whether in (9) and (10) the feasibility functions and reward are made to condition on diffusion time i, as I would expect it should since only $\\\\tau_i$ is available at i.'*\\n\\n**Response.** Thanks for pointing out this concern. You are correct; both the feasibility functions and the reward model are conditioned on the diffusion timestep, $i$. This conditioning is essential because these models must be aware of the current stage of the denoising process to effectively guide the generative diffusion model. In our implementation, both the reward model and the feasibility model process the input diffusion timestep, $i$, using a Multilayer Perceptron (MLP) layer. The output of this MLP is then broadcast-added to the input trajectory, following the approach described in [1] Janner et al. This detail has been added to Section A.4 of the Appendix in the revised manuscript.\\n\\n&nbsp;*3. 'After algorithm 1 completes, how are constraints chosen by the practitioner as the abstract says?'*\\n\\n**Response.** Thanks for raising this important point. As shown in Algorithm 1, each discovered constraint is conditioned on a specific regularization parameter $\\\\delta$. This parameter directly controls the level of sparsity of the discovered constraint, with larger values of $\\\\delta$ encouraging the feasibility $\\\\phi$ to be closer to 1. The choice of constraint is thus directly influenced by the practitioner's preference for sparsity. 
Specifically, a sparse constraint minimizes its impact on the reward-maximizing policy, as demonstrated in Section 5.1 of [3]. Conversely, a dense constraint offers stronger safety guarantees but at the cost of potentially more significant restrictions on the agent's actions and a reduction in the overall reward.\\n\\n&nbsp;*4. 'How do the authors choose what constraints they apply when sampling their final evaluations? This is stated in the abstract but is not discussed in the paper at all.'* \\n\\n**Response.** We appreciate you pointing out this important concern.\\n\\nOur experiments evaluate \\\"Control Performance\\\" in Section 5.1, \\\"Exploratory Performance\\\" in Section 5.2, and \\\"Learning Efficiency\\\" in Section 5.3. The respective choice of constraint in each experiment is as follows:\\n\\n1) For control performance (Section 5.1), we selected the constraints discovered under the largest $\\\\delta$. Since $\\\\delta$ controls the level of regularization on sparsity, our setting aligns with the previous setting of ICRL solvers that favor the sparsity of constraints, such as Section 5.1 of [1], thereby providing a fair comparison with previous works.\\n\\n2) For exploratory performance (Section 5.2), all the constraints stored in the constraint pool are included in the evaluation, with which we illustrated the diversity of learned constraints in Figures 4, 7, 8, and 9.\\n\\n3) In terms of learning efficiency (Section 5.3), we illustrate the sample complexity in discovering the first valid constraint with Ex-ICL. This is to offer a fair comparison with previous work that studies only one constraint.\\n\\nWe have clarified these in the revised version of our paper.\\n\\n&nbsp;*5. 'How is constrained data collected? Is there an expert that already includes the constraint?'*\\n\\n**Response.** We appreciate you raising this consideration. The constraint-satisfying data is generated by an expert policy trained under a ground-truth constraint. 
This approach is consistent with many previous ICRL studies [2]. However, due to the ill-posed nature of the Inverse Constrained Reinforcement Learning (ICRL) problem, the ground-truth constraint is not uniquely identifiable from expert demonstrations alone. Therefore, our method employs exploration to identify a feasible set of constraints, providing flexibility and efficiency to constraint inference.\"}", "{\"comment\": \"Thank you for the response and I will raise my score to 6 after reading this.\"}", "{\"metareview\": \"This paper uses Diffusion models to develop new algorithms for in context reinforcement learning. While the sub-techniques are not novel, and there are no new theoretical contributions, authors generally appreciated the synthesis of existing ideas into a new application. Experiments showed promise, but reviewers felt that the domains could have been extensive, and had concerns that the proposed method did not uniformly outperform state of art. A more extensive set of experiments would be appreciated.\\n\\nOverall, this paper is a nice application of existing ideas, and all three reviewers are okay with acceptance. However, no single reviewer feels incredibly strongly given the limitations mentioned above. Hence, I lean towards acceptance as a poster.\", \"additional_comments_on_reviewer_discussion\": \"No reviewers were willing to champion the paper. One reviewer in particular said that they found the idea interesting, but would have liked to have since more extensive and compelling experimental results.\"}", "{\"title\": \"Author response to Reviewer 56R9 - Part 3\", \"comment\": \"We are delighted by your positive assessment of our work and are most grateful for your thoughtful and thorough review of our manuscript.Your insightful comments have been invaluable in improving the clarity and precision of our work. We appreciate the significant time and effort dedicated to this process and welcome the opportunity to address your suggestions. 
Thank you very much!\"}", "{\"title\": \"Summary of Updates and Global Responses\", \"comment\": \"Dear Reviewers, Area Chairs, and Program Chairs,\\n\\nWe sincerely appreciate your valuable feedback and insightful guidance. Your comments have been instrumental in significantly improving our work. In response, we have incorporated detailed clarifications, expanded explanations, additional experimental results, and improved figures into our revised manuscript (changes are highlighted in blue). A summary of the major updates is provided below:\\n\\n1. More Details to Cost Value Model: Addressing the suggestions of Reviewers X5Jn and zyAA, we have clarified the U-Net-based architecture, the use of noisy trajectory inputs, the generation of pair-wise feasibility and trajectory cost value outputs, and the noise-robust nature of the model for guiding the denoising process.\\n\\n2. Clarification on Constraints Selection: As requested by Reviewers zyAA and 56R9, we have detailed the $\\\\delta$-based constraint selection process used in our experiments and further elaborated on the relationship between sparsity, the regularization parameter $\\\\delta$, and its role in practical constraint selection.\\n\\n3. Adding Experiment under Realistic Environment: Following the suggestion of Reviewer zyAA, we conducted additional experiments in a realistic autonomous driving environment to showcase our method's ability to handle complex real-world tasks.\\n\\n4. More Experiments and Analysis on Cost Model's Performance: Addressing Reviewer zyAA\\u2019s suggestion, we have included additional experimental results illustrating the changing trend of cost model predictions as a function of the number of exploration rounds. 
In addition, to address the questions of Reviewer 56R9, we have provided a more in-depth analysis of the reasons behind the large variance observed in the exploratory cost models' predictions and the varying performance gap between our method and the baselines.\\n\\n5. In-depth Explanation of Setting and Theory: Responding to Reviewer X5Jn's questions, we have provided a comprehensive explanation of the cost and reward definitions within our framework and detailed the duality optimization theory underlying our method's design.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I apologize for the tardiness of my response.\\n\\nThe authors have satisfactorily answered my questions. \\n\\nIt seems that the ICLR system does not allow for a score between 6 and 8, so I'll stick with my original score.\"}", "{\"summary\": \"The paper tackles the safe reinforcement learning problem using a diffusion model and guidance to train a set of feasibility functions. Unlike traditional inverse constraint learning, which is difficult to verify whether a candidate constraint is feasible and returns a single constraint, the paper's algorithm rapidly recovers a diverse set of constraints once the diffusion model is trained on expert data. The paper's algorithm outperforms baselines on constrained mazes and Mujoco experiments regarding performance and sample efficiency.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The idea of amortizing the ICL loop cost by pre-training a diffusion model is interesting.\", \"The paper provides convincing empirical results that show the superiority of their method compared to the baselines of their experiments, both for reward and cost. 
It also investigates how reliable feasibility functions are on expert and non-expert data.\", \"While they have not directly demonstrated the advantages of having multiple constraint candidates returned by the algorithm (aside from possibly making search more efficient), this seems like a practical feature to have for real-world use cases.\"], \"weaknesses\": [\"While the authors list computational concerns as one of the advantages Ex-ICL has over ICL, they do not conclusively show Ex-ICL's computational advantage. Figure 6 shows that Ex-ICL is more sample efficient in constraint inference, but a true test of computational efficiency should also take into account diffusion model training time.\", \"The experiments on maze and Mujoco are comprehensive but are fairly simple. For example, the baseline paper [1] includes a more realistic experiment on traffic scenarios.\", \"There's not enough detail in the main paper or appendix on methodology (how is \\\\phi parameterized?)\", \"[1] Guorui Quan, Zhiqiang Xu, & Guiliang Liu (2024). Learning Constraints from Offline Demonstrations via Superior Distribution Correction Estimation. In Forty-first International Conference on Machine Learning.\"], \"questions\": [\"How are you selecting the constraint out of the constraint pool discovered by Ex-ICL for the experiment section?\", \"Why does Figure 4's Ex-ICL figure have so much larger variance for bad trajectory cost value than other methods?\", \"How sensitive are the results to exploration coefficient \\\\delta and exploration round m? 
Also, would it be instructive to showcase model performance for Ex-ICL that only searches over a single \\\\delta?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes ExICL to tackle the Inverse Constraint Learning problem, which aims to recover a diverse set of feasible constraints through an exploratory constraint update mechanism. The designed generative diffusion verifier utilizes the guided sampling strategy to verify the feasibility of explored constraints. This paper also aims to guarantee the robustness of feasible constraint discovery by accurately estimating the cost of noisy trajectories.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Introduction clearly states the current issues in Inverse Constraint Learning and the related works section is complete.\\n2. The experiments are comprehensive, demonstrating the effectiveness of the proposed approach.\", \"weaknesses\": \"1. The contributions claimed in this paper are not apparent to me. Contents in 4.1 is quite close to what has been proposed in [1], and the non-convex objective theorem is inherited from [2]; the ambiguity of how things are defined in Sections 4.2 and 4.3 impairs the significance of contributions again. There are many math notations that are not defined or only briefly mentioned. I will list each of them below in the question section. I found it confusing and hard to see how the idea works.\\n2. Again, theorem 4.1 seems related to some existing conclusion from Paternain's paper [2], and this theorem is critical as it supports the zero duality gap for non-convex objective. The theorem stated in this paper is not quite the same as what is shown in [2], as the constraints here are not constant but are functions, but constants in [2]. There is supposed to be a connection shown here to support the theorem or a direct proof. 
A typo follows the theorem in Equation (9): $\\\\lambda\\\\epsilon$ might be missing at the end in the exponential term.\\n\\n[1] Janner, Michael, et al. \\\"Planning with Diffusion for Flexible Behavior Synthesis.\\\" International Conference on Machine Learning. PMLR, 2022.\\n[2] Paternain, Santiago, et al. \\\"Constrained reinforcement learning has zero duality gap.\\\" Advances in Neural Information Processing Systems 32 (2019).\", \"questions\": \"1. My biggest confusion is about how the reward and cost are defined, respectively. Usually reward is defined as the negative cost if cost is positive, but in this paper, it seems not. Can you explicitly show how they are defined and how different they are?\\n2. In section 4.2, on line 286, how is $\\\\phi_\\\\omega(s_t^i, a_t^i, i)$ defined? \\n3. In section 4.3, can you explicitly give the expressions for dist$[1, \\\\phi_\\\\omega(s_t, a_t)$ and dist$[\\\\tilde\\\\phi_\\\\omega(s_t, a_t), \\\\phi_\\\\omega(s_t, a_t)])$?\\n4. In algorithm 1, ``Updating $\\\\lambda$ by minimizing the loss $\\\\mathcal{L} = \\\\lambda \\\\mathbb{E}_{\\\\hat\\\\tau\\\\sim \\\\tilde{p}_M}[c(\\\\tau) - \\\\epsilon]$, why is no reward term involved here to update $\\\\lambda$? Another question related to this in Table 2: there is a significant discrepancy between the magnitudes of the Reward and Cost. Could you provide some insight into this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response to Reviewer X5Jn - Part 1\", \"comment\": \"Dear Reviewer X5Jn,\\n\\nWe sincerely value your time and effort in evaluating our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns.\\n\\n&nbsp;*1. 'The contributions claimed in this paper are not apparent to me. 
Contents in 4.1 is quite close to what has been proposed in [1], and the non-convex objective theorem is inherited from [2]'*\\n\\n**Response.** Sorry for raising the confusion. We must first clarify that the core contribution of our research lies in the development of a novel method for learning a diverse set of feasible constraints from offline demonstration data. While certain constituent techniques may bear resemblance to prior work, their integration and application towards this specific objective distinguishes our approach.\\n\\nAlthough our generative diffusion verifier draws inspiration from [1], as noted in Section 4.1, several significant distinctions exist. Firstly, our utilization of the diffusion planner is fundamentally different. In our work, the diffusion planner serves the purpose of verifying learned constraints, rather than merely controlling the agent as in [1]. Secondly, our implementation incorporates a novel guidance mechanism utilizing an optimal probabilistic representation of the constrained policy model. This modification is substantiated by the findings presented in [2]. Critically, the subsequent stages of our methodology, namely the noise-robust constraint update and strategic exploration techniques, are entirely novel and unrelated to either [1] or [2]. These components are crucial to the success of our method in learning a diverse and robust set of constraints from offline data.\\n\\n&nbsp;*2. 'Again, theorem 4.1 seems related to some existing conclusion from Paternain's paper [2], and this theorem is critical as it supports the zero duality gap for non-convex objective. The theorem stated in this paper is not quite the same as what is shown in [2], as the constraints here are not constant but are functions, but constants in [2]. There is supposed to be a connection shown here to support the theorem or a direct proof. 
A typo follows the theorem in Equation (9): $\\\\lambda\\\\epsilon$ might be missing at the end in the exponential term.'*\\n\\n**Response.** We appreciate your careful reading of our manuscript and thank you for pointing out this important clarification. By \\\"constants in [2]\\\", we assume the reviewer indicates the cost $c(s_t, a_t)$ in [2] as a constant function given by the environment, providing the same output for identical inputs. In contrast, within our Inverse Constrained Reinforcement Learning (ICRL) framework, $c(s_t, a_t)$ is a time-varying function whose parameters are subject to updates.\\n\\nHowever, our ICRL training process is structured in two phases. The first phase focuses on constraint discovery, involving parameter updates for the cost model $c$. Subsequently, a Constrained Reinforcement Learning (CRL) phase is executed, updating the policy to comply with the discovered constraints. During this CRL phase, discovered constraints $c(s_t, a_t)$ behave as a constant function as its parameters are frozen. Therefore, Theorem 1 of [2], and its associated zero duality gap guarantee, remain valid within our framework. Algorithm 1 provides detailed information on this two-phase training process.\\n\\nFinally, we thank you for identifying the typographical error; this has been corrected in the revised manuscript.\\n\\n&nbsp; 3. *'My biggest confusion is about how the reward and cost are defined, respectively. Usually reward is defined as the negative cost if cost is positive, but in this paper, it seems not.'*\\n\\n**Response.** We thank you for your insightful question regarding the relationship between reward and cost in our work. It is important to clarify that the reward function in our framework is not simply defined as the negative cost. Since we address a Constrained Markov Decision Process (CMDP) problem, rewards and costs serve distinct roles in the policy optimization process. Specifically, agents are trained to maximize cumulative rewards. 
However, rather than directly minimizing cumulative costs, agents are trained to maintain cost values below a defined threshold, $\\\\epsilon$. Section 3 provides a detailed explanation of the definitions of both reward and cost within our CMDP setting.\\n\\nIf the reviewer is referring to [2], we wish to emphasize that the cost $c(s_t, a_t)$ in our work is functionally equivalent to $r_i(s_t, a_t)$ in [2]. In both cases, these functions are set to respect a cost/reward value threshold. While the cost value $\\\\sum\\\\limits^T_{t=0}\\\\gamma^tc(s_t,a_t)$ is set to be smaller than $\\\\epsilon$, the $i$-th constraint value $\\\\sum\\\\limits^T_{t=0}\\\\gamma^tr_i(s_t,a_t)$ is set to be larger than $c_i$.\"}", "{\"summary\": \"The authors consider inverse constraint learning, and improve on previous work by constructing an algorithm that can generate a set of constraints, and verify those constraints by applying techniques developed in diffusion modelling for RL. In particular, the authors construct a guidance term that is the gradient of a set of feasibility terms, which they can use for on-the-fly verification of the proposed feasibility functions, thereby eliminating a costly second optimization loop. The authors test their proposed algorithm on a variety of RL benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The technique makes clever use of the advantages found in diffusion techniques: being able to modify the policy at run time by applying guidance terms\", \"The paper strikes a good balance of building a new method out of existing elements.\"], \"weaknesses\": [\"The authors seem to omit some details of their mechanism, which I think are quite crucial to the paper. These are:\", \"how is reward treated? Is a separate reward model that is (1) differentiable, and (2) conditioned on diffusion time (i in the author's notation) trained following Janner et al? These details are not present in Alg. 
1, but are necessary to evaluate the gradient p_Mc in eqns (9) and (10).\", \"It is also not made clear whether in (9) and (10) the feasibility functions and reward are made to condition on diffusion time i, as I would expect it should since only tau_i is available at i.\", \"After algorithm 1 is complete, how is the final policy constructed for the experiments? Perhaps this is as simple as running eqn. (9) and (10) a final time, but this is not specified either.\", \"After algorithm 1 completes, how are constraints chosen by the practitioner as the abstract says? How do the authors choose what constraints they apply when sampling their final evaluations? This is stated in the abstract but is not discussed in the paper at all.\", \"How is constrained data collected? Is there an expert that already includes the constraint?\"], \"minor\": [\"A few scattered grammar errors could be addressed\"], \"questions\": [\"See also questions under \\\"weaknesses\\\"\", \"Do the authors have some intuition why their method seems to outperform baselines significantly for HalfCheetah, marginally for Limited-Walker and only ties for Blocked-Ant?\", \"In the MuJoCo experiments, is the reward presented in Table 2 the feasible reward? I.e. are rewards truncated after a constraint has been violated? It seems that that would be the more interesting metric to report, I would recommend the authors report that metric, and if they already do so make it clear it is that metric.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your informative reply.\"}", "{\"title\": \"Author response to Reviewer zyAA - Part 2\", \"comment\": \"&nbsp;*5. 'Why does Figure 4's Ex-ICL figure have so much larger variance for bad trajectory cost value than other methods?'*\\n\\n**Response.** We appreciate you raising this important point. 
In principle, to recover the constraint, ICRL algorithms increase the cost values of bad trajectories to be above the threshold $\\\\epsilon$. On the other hand, for the cost values of expert trajectories, ICRL algorithms must guarantee their values to be smaller than $\\\\epsilon$. So the scale of costs for bad trajectories is much larger than those of expert trajectories, which causes significantly higher variance for bad trajectory cost values. By implementing strategic exploration, EX-ICL successfully identifies a diverse set of feasible cost models, causing the variance of predicted cost values to be higher. We have revised the paper accordingly.\\n\\n&nbsp;*6. 'How sensitive are the results to exploration coefficient $\\\\delta$ and exploration round m? Also, would it be instructive to showcase model performance for Ex-ICL that only searches over a single $\\\\delta$?'*\\n\\n**Response.** We appreciate your insightful question. To assess the sensitivity to $\\\\delta$, we conducted experiments across a range of exploration coefficients, the results of which are presented in Figures 4, 7, and 8. These figures illustrate the predicted cost values for models trained using different values of $\\\\delta$ under different environments. Specifically, the initial model was trained with the smallest $\\\\delta_0$. Five additional models were then trained using linearly increasing $\\\\delta$ values: $\\\\delta_i = \\\\delta_0 + (\\\\delta_5 - \\\\delta_0) \\\\times \\\\frac{i}{5}$.\\n\\nTo investigate the impact of the exploration rounds $m$, we have included a supplementary Figure 9 in Section B.1 of the revised paper. This figure shows the trend in cost values as the number of training rounds increases from 0 to $M$ using the largest value of $\\\\delta$. 
This supplementary figure supports our claim that the difference in cost values between the initial model and the trained models grows significantly with increasing training rounds.\\n\\n**References**\\n\\n[1] Guorui Quan, Zhiqiang Xu, \\\\& Guiliang Liu (2024). Learning Constraints from Offline Demonstrations via Superior Distribution Correction Estimation. In Forty-first International Conference on Machine Learning.\\n\\n[2] Janner, Michael, et al. \\\"Planning with Diffusion for Flexible Behavior Synthesis.\\\" International Conference on Machine Learning. PMLR, 2022.\"}", "{\"title\": \"Author response to Reviewer X5Jn - Part 2\", \"comment\": \"&nbsp;*4. 'In section 4.2, on line 286, how is $\\\\phi(s^i_t, a^i_t, i)$ defined?'*\\n\\n**Response.** We appreciate your attention to detail. The notation $\\\\phi(s^i_t, a^i_t, i)$ represents the feasibility of a state-action pair $(s^i_t, a^i_t)$ within a noisy trajectory $\\\\tau^i$ generated during the $i$-th denoising step.\\n\\nTo elaborate, the feasibility of a state-action pair $(s_t, a_t)$ at planning timestep $t$ is denoted by $\\\\phi(s_t, a_t)$. However, this feasibility measure cannot be directly applied to guide the trajectory-level generation of diffusion verifier. As mirroring the approach in [1]'s official implementation, effective optimization of the diffusion verifier necessitates noise-robust reward and cost models capable of predicting values for noisy trajectories during the denoising process. Therefore, we explicitly incorporate the diffusion timestep $i$, resulting in the notation $\\\\phi(s^i_t, a^i_t, i)$ to denote the feasibility of the state-action pair on the noisy trajectory $\\\\tau^i$.\\n\\nThis aspect has been clarified with additional detail regarding cost value model in Appendix A.4 of the revised manuscript.\\n\\n&nbsp;*5. 
'In section 4.3, can you explicitly give the expressions for $\\\\text{dist}(1, \\\\phi)$ and $\\\\text{dist}(\\\\tilde{\\\\phi}, \\\\phi)$?'*\\n\\n**Response.** Thank you for your question. In our work, the distance function $\\\\text{dist}$, is chosen to be the $l_1$-norm, as $\\\\phi_{\\\\omega}(s_t, a_t)$ is a scalar value. Consequently, $\\\\text{dist} (1, \\\\phi_{\\\\omega}(s_t, a_t)) = |1 - \\\\phi_{\\\\omega}(s_t, a_t)|$ and $\\\\text{dist} (\\\\phi_{\\\\omega}(s_t, a_t), \\\\tilde{\\\\phi}_ {\\\\omega}(s_t, a_t)) = |\\\\phi_{\\\\omega}(s_t, a_t) - \\\\tilde{\\\\phi}_{\\\\omega}(s_t, a_t)|$. This selection of the $l_1$-norm is now explicitly clarified in Section 4.3 of the revised manuscript.\\n\\n&nbsp;*6. 'In algorithm 1, ``Updating $\\\\lambda$ by minimizing the loss $\\\\lambda\\\\mathbb{E}[c(\\\\tau)-\\\\epsilon]$, why is no reward term involved here to update?'*\\n\\n**Response.** We appreciate your question. This stems from the inherent properties of the standard Lagrangian approach to constrained optimization.\\n\\nAs detailed in [2, 4], the dual problem for Constrained Reinforcement Learning (CRL) can be formulated as:\\n\\n$\\nD^* = \\\\arg\\\\min_{\\\\lambda} [V_r + \\\\lambda(V_c - \\\\epsilon)]\\n$\\n\\n(see (14) in [2]). To update $\\\\lambda$, we need to conduct the gradient descent for the above objective:\\n\\n$\\n\\\\lambda_{k+1} = \\\\lambda_k - \\\\eta \\\\partial_{\\\\lambda}d(\\\\lambda)\\n$\\n\\nbased on equation (20) in [2], and $\\\\partial d(\\\\lambda) = \\\\partial_{\\\\lambda}[V_r + \\\\lambda(V_c - \\\\epsilon)] = (V_c - \\\\epsilon) = \\\\mathbb{E}[c(\\\\tau) - \\\\epsilon]\\n$. \\n\\nThis leads to the loss function for $\\\\lambda$ updates presented in our Algorithm 1: $\\\\lambda \\\\mathbb{E}[c(\\\\tau) - \\\\epsilon]$.\\n\\nAs demonstrated in our derivation above, the reward term is not involved in the update of $\\\\lambda$. 
The reward value $V_r$ is to be maximized in the original (PI) rather than to be constrained, thus it is decoupled from Lagrange multiplier $\\\\lambda$ in the dual problem objective.\\n\\n&nbsp;*7. 'Another question related to this in Table 2: there is a significant discrepancy between the magnitudes of the Reward and Cost. Could you provide some insight into this?'*\\n\\n**Response.** Thanks for raising this concern. The discrepancy between the magnitudes of reward and cost in the experiments \\nstems from the semantic difference and physical meaning distinctions between rewards and costs in the chosen environments.\\n\\nSpecifically, in the MuJoCo locomotion environments detailed in Table 2, the cumulative feasible reward signifies the average distance traveled by the HalfCheetah, Walker, or Ant agent before either termination or violation of the constraints. Conversely, the cost represents the average number of timesteps during which the constraint was violated, measured within the 1000-timestep limit or until termination. Crucially, the environment assigns a cost of 1 for each timestep involving a constraint violation and a cost of 0 otherwise. This inherent distinction creates a significant discrepancy between reward and cost values. This reward and cost design is a standard practice within the Constrained Reinforcement Learning (CRL) and Inverse Constrained Reinforcement Learning (ICRL) literature, as noted in [3].\\n\\n**References**\\n\\n[1] Janner, Michael, et al. \\\"Planning with Diffusion for Flexible Behavior Synthesis.\\\" International Conference on Machine Learning. PMLR, 2022. \\n\\n[2] Paternain, Santiago, et al. \\\"Constrained reinforcement learning has zero duality gap.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[3] Liu, G., Xu, S., Liu, S., Gaurav, A., Subramanian, S. G., \\\\& Poupart, P. (2024). 
A Comprehensive Survey on Inverse Constrained Reinforcement Learning: Definitions, Progress and Challenges.\\n\\n[4] Boyd, S., \\\\& Vandenberghe, L. (2004). Convex optimization.\"}", "{\"comment\": \"Thank you for your comprehensive response and additional experiments. I have raised my score to 6. One more question, why is BC's cost in Table 5 much lower than Ex-ICL? Does this suggest that ICL methods are unnecessary to avoid constraint violation for this problem?\"}", "{\"title\": \"Encouraging the reviewers to participate in discussion\", \"comment\": \"Hello, I encourage the reviewers to participate in discussion with the authors! Please recall that this phase of the discussion period ends on Nov 26th, AoE.\\n\\nTo the authors eager to see author responses, I don't think there is anything to worry about yet. 3 days remain in the response period. Moreover it is currently the weekend (Saturday), and is thus understandable that the reviewers are not yet checking their OpenReview clients :)\"}", "{\"comment\": \"We are deeply grateful for the reviewer\\u2019s feedback and the significant time and effort invested in reviewing our manuscript. Their insightful comments have been invaluable in enhancing both the clarity of our work. Thank you very much!\", \"title\": \"Author response to Reviewer X5Jn - Part 3\"}", "{\"title\": \"Author response to Reviewer zyAA - Part 3\", \"comment\": \"We would like to express our sincere gratitude for the reviewer\\u2019s constructive criticism and the considerable time and effort dedicated to evaluating our manuscript. Their insightful comments have proven invaluable in improving the clarity and overall quality of our work. Thanks a lot!\", \"as_for_the_insightful_concern_the_reviewer_raised\": \"&nbsp;*8. 'why is BC's cost in Table 5 much lower than Ex-ICL? Does this suggest that ICL methods are unnecessary to avoid constraint violation for this problem?'*\\n\\n**Response.** We appreciate the reviewer's important observation. 
In the meantime, another critical observation is that BC yields a low cumulative reward, indicating a failure to meet the navigation objective within the designated time. This result suggests that there is no incentive for the BC agent to prioritize acceleration for reducing travel time, which consequently avoids violations of the velocity constraint. Essentially, BC learns a conservative or \\\"dummy\\\" policy that satisfies the constraint but is far from optimal. In contrast, our proposed method\\u2019s comparatively high reward indicates successful navigation. This necessitates a learning strategy that incorporates acceleration, increasing the likelihood of constraint violations and thus resulting in a higher rate of nonzero cost (cost > 0). Importantly, the cost rate of our method remains lower than that of the ICSDICE [1] baseline, suggesting superior constraint modeling while maintaining a comparable reward performance.\"}", "{\"title\": \"Author response to Reviewer 56R9 - Part 2\", \"comment\": \"&nbsp;*6. 'Do the authors have some intuition why their method seems to outperform baselines significantly for HalfCheetah, marginally for Limited-Walker and only ties for Blocked-Ant?'*\\n\\n**Response.** Thank you for raising this important concern. In fact, our method has achieved comparable performance to the expert agent across all 3 environments. The significant lead on HalfCheetah is because the other baselines have difficulty resolving the HalfCheetah environment.\\n\\nWe have reported the reward and cost performance of our method, baseline [3] method, and expert and suboptimal demonstrations in Section C of the Appendix of our revised paper to support this claim. It shows that our method\\u2019s performance closely aligns with that of the expert policy under all environments. 
On the contrary, the baseline [3] method performed well in the Limited-Walker and Blocked-Ant environments while failing to achieve expert-level performance in the Obstacle-HalfCheetah environment.\\n\\n&nbsp;*7. 'In the MuJoCo experiments, is the reward presented in Table 2 the feasible reward? I.e. are rewards truncated after a constraint has been violated? It seems that that would be the more interesting metric to report, I would recommend the authors report that metric, and if they already do so make it clear it is that metric.'*\\n\\n**Response.** Yes, the reward values reported in Table 2 represent the feasible reward, which is the cumulative reward obtained before any constraint violation occurs. This metric is consistent with the approach used in [3]. This clarification has been added to Table 2 in the revised version of the paper.\\n\\n**References**\\n\\n[1] Janner, Michael, et al. \\\"Planning with Diffusion for Flexible Behavior Synthesis.\\\" International Conference on Machine Learning. PMLR, 2022. \\n\\n[2] Liu, G., Xu, S., Liu, S., Gaurav, A., Subramanian, S. G., \\\\& Poupart, P. (2024). A Comprehensive Survey on Inverse Constrained Reinforcement Learning: Definitions, Progress and Challenges.\\n\\n[3] Guorui Quan, Zhiqiang Xu, Guiliang Liu (2024). Learning Constraints from Offline Demonstrations via Superior Distribution Correction Estimation. In Forty-first International Conference on Machine Learning.\"}"
] }
0UO1mH3Iwv
Edge-aware Image Smoothing with Relative Wavelet Domain Representation
[ "Huiqing QI", "Xiaoliu Luo", "Tingting Li", "Fang Li" ]
Image smoothing is a fundamental technique in image processing, designed to eliminate perturbations and textures while preserving dominant structures. It plays a pivotal role in numerous high-level computer vision tasks. More recently, both traditional and deep learning-based smoothing methods have been developed. However, existing algorithms frequently encounter issues such as gradient reversals and halo artifacts. Furthermore, the smoothing strength of deep learning-based models, once trained, cannot be adjusted to adapt to different levels of texture complexity. These limitations stem from the inability of previous approaches to achieve an optimal balance between smoothing intensity and edge preservation. Consequently, image smoothing while maintaining edge integrity remains a significant challenge. To address these challenges, we propose a novel edge-aware smoothing model that leverages a relative wavelet domain representation. Specifically, by employing wavelet transformation, we introduce a new measure, termed Relative Wavelet Domain Representation (RWDR), which effectively distinguishes between textures and structures. Additionally, we present an innovative edge-aware scale map that is incorporated into the adaptive bilateral filter, facilitating mutual guidance in the smoothing process. This paper provides complete theoretical derivations for solving the proposed non-convex optimization model. Extensive experiments substantiate that our method is competitive with, and often superior to, previous algorithms in edge preservation and artifact removal. Visual and numerical comparisons further validate the effectiveness and efficiency of our approach in several applications of image smoothing.
[ "Image smoothing", "Wavelet transformation", "Relative wavelet domain representation", "Edge-preserving", "Non-convex optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=0UO1mH3Iwv
https://openreview.net/forum?id=0UO1mH3Iwv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vegIYxOiBR", "uLVbRCBgOZ", "qVVWXJA4Du", "qRMqKNWnBp", "nkMxihkZup", "k5rxqnA75I", "f9IExP3Rro", "cEkB4FHUR3", "UR8TyFqIF6", "TgFxRSdylX", "GpbKqN850r", "Fp4tNO4kZm", "Ao8FXBBANT", "3BxAEWjkMb" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732283180459, 1732605444345, 1732331230283, 1730650128221, 1730620833331, 1734520354704, 1732286767426, 1730511723728, 1732285229126, 1732285482209, 1732286985252, 1732283961615, 1732627900304, 1737523580771 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Reviewer_bdtH" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Reviewer_bdtH" ], [ "ICLR.cc/2025/Conference/Submission3514/Reviewer_kXjP" ], [ "ICLR.cc/2025/Conference/Submission3514/Area_Chair_sJdH" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Reviewer_M1ZN" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Submission3514/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Official Comment by Authors\", \"comment\": \"We appreciate Reviewer bdtH for their detailed feedback and thoughtful questions. We respectfully address each concern below:\\n***\\n**[Q1] Regarding the objective metrics:**\\n\\nThanks for your suggestions, we reviewed a large amount of relevant literature and found that no objective metrics can be directly used to measure the quality of smoothed images. 
For the downstream tasks, **we utilize five no-reference image quality objective evaluation metrics: BRISQUE [1], PIQE [2], SSEQ [3], ILNIQUE [4], and CEIQ [5] to compare performance on image detail enhancement and HDR tone mapping tasks**. The smaller values of BRISQUE and PIQE denote higher-quality images. While the larger values of SEQ, ILNIQUE, and CEIQ indicate higher-quality images. The quantitative results corresponding to Figure 6 are in the following Table:\\n| Methods | BRISQUE $\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE $\\\\uparrow$ |\\n|:--------------:|:----------------------------:|:--------------------------:|:------------------------:|:--------------------------:|\\n| L0 | 10.6974 | 35.6176 | 24.7566 | 117.81 |\\n| ILS | 23.5255 | 37.0118 | 17.7950 | 122.43 |\\n| DRTV | 8.5623 | 33.5745 | 25.1130 | 124.46 |\\n| RoG | 14.9437 | 34.6882 | 24.5738 | 124.32 |\\n| muGIF | 20.6328 | 36.6683 | 25.1297 | 121.44 |\\n| dRTV | 15.4301 | 25.1630 | 9.9543 | 124.49 |\\n| Ours | **7.6912** | **24.9118** | **26.7940** | **128.84** |\\n\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_14_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ |\\n|:-------:|:-------------------:|:-----------------:|:---------------:|:-----------------:|\\n| RGF | 28.8617 | 30.1807 | 23.7323 | 117.56 |\\n| ResNet | 22.3801 | 36.4881 | 26.4587 | 120.12 |\\n| muGIF | 16.0524 | 30.2385 | 19.8289 | 123.25 |\\n| VDCNN | 28.9499 | 35.4522 | 26.1938 | 123.85 |\\n| Ours | **9.6503** | **26.8804** | **31.5996** | **126.78** |\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_15_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE$\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ | CEIQ$\\\\uparrow$ 
|\\n|:-------:|:-------------------:|:----------------:|:---------------:|:-----------------:|:--------------:|\\n| L0 | 24.8086 | 28.6853 | 12.3523 | 129.59 | 2.6947 |\\n| dRTV | 23.7972 | 26.9161 | 16.8464 | 129.16 | 2.6867 |\\n| RGF | 24.3680 | 27.2629 | 15.6265 | 126.34 | 2.7728 |\\n| Ghosh | 23.7344 | 38.8357 | 15.3685 | 125.49 | 2.0085 |\\n| RoG | 28.1463 | 28.8937 | 12.1442 | 135.53 | 2.8496 |\\n| muGIF | 22.8831 | 32.4163 | 13.2606 | 132.94 | 2.6515 |\\n| RTV | 20.7134 | 30.0749 | 16.1376 | 133.17 | 2.6804 |\\n| Ours | **14.8449** | **24.8736** | **17.9714** | **135.69** | **2.8867** |\\n***\\nThese results demonstrate that our model achieves significant superiority over other methods in downstream tasks of smoothed images.\\n***\\n**References:**\\n\\n[1] Mittal, Anish, Anush Krishna Moorthy, and Alan Conrad Bovik. \\\"No-reference image quality assessment in the spatial domain.\\\" IEEE Transactions on image processing 21.12 (2012): 4695-4708.\\n\\n[2] Venkatanath, Narasimhan, et al. \\\"Blind image quality evaluation using perception based features.\\\" 2015 twenty first national conference on communications (NCC). IEEE, 2015.\\n\\n[3] Liu, Lixiong, et al. \\\"No-reference image quality assessment based on spatial and spectral entropies.\\\" Signal processing: Image communication 29.8 (2014): 856-863.\\n\\n[4] Zhang, Lin, Lei Zhang, and Alan C. Bovik. \\\"A feature-enriched completely blind image quality evaluator.\\\" IEEE Transactions on Image Processing 24.8 (2015): 2579-2591.\\n\\n[5] Yan, Jia, Jie Li, and Xin Fu. \\\"No-reference quality assessment of contrast-distorted images using contrast enhancement.\\\" arXiv preprint arXiv:1904.08879 (2019).\"}", "{\"comment\": \"Thanks for the rebuttal. I suggest adding an analysis of failure cases in the final version. My score remains unchanged.\"}", "{\"title\": \"Response Summary\", \"comment\": \"We sincerely appreciate all reviewers' thorough and constructive comments. 
We are pleased that the reviewers recognized our method as **reasonable and novel (Reviewer bdtH)**, **complete theoretical guarantee and superior performance (Reviewer kXjP)**, and our **fluent and academic writing (Reviewer M1ZN)**.\", \"our_main_responses_are_as_following_five_folds\": \"1. Provided comprehensive **objective qualitative results** in the tables below;\\n\\n2. Clarified the **generation mechanism of the detail layer** in Figure 1;\\n\\n3. Conducted **detailed ablation studies** in Figure 8 and Figure 9;\\n\\n4. Clarified our method's **parameters sensitive analysis and recommended settings**;\\n\\n5. Provided the **specific visual task analysis** and the importance of our approach in real-world applications.\\n\\nNotably, all these improvements are incorporated into our revised manuscript while maintaining clarity and technical depth.\\n***\\nWe are grateful for helping us improve our manuscript\\u2019s quality and completeness.\"}", "{\"summary\": \"The main contribution of this work is the introduction of RWDR that effectively distinguishes textures from primary structures and preserves weaker edges. Additionally, the paper proposes an innovative edge-aware scale map method that dynamically adjusts scale based on the image structure, resulting in clearer distinctions between structure and texture. Experimental results demonstrate that the proposed approach provides superior edge-preserving smoothing compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces relative wavelet domain representation into bilateral filtering, which is reasonable and novel.\\n\\n2. The method achieves superior visual results compared to previous studies. \\n\\n3. The paper includes comprehensive theoretical derivations, technical descriptions, and runtime analysis of the algorithm.\", \"weaknesses\": \"1. 
The paper provides extensive visual results, but I\\u2019m curious how different algorithms are objectively evaluated based on visual quality. The authors should consider comparing performance on downstream tasks with objective metrics. A user study could also statistically confirm the advantages of the proposed method.\\n\\n2. As a new method, it likely performs well in certain scenarios. However, I am more interested in its robustness and stability. In other words, can the authors provide a lower bound for the algorithm's performance? In which scenarios might it fail? Additionally, how sensitive is the algorithm to parameter changes?\", \"questions\": \"Could the authors provide an online demo to allow users to test the method easily? While it\\u2019s not essential for acceptance, it would add value for potential users.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The author introduces a mutually guided edge-aware smoothing model based on relative wavelet domain representation. Their proposed RWDR serves as a novel measure for effectively differentiating between textures and structures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The solution of the proposed model is supported by a complete theoretical guarantee, which is a strong point.\\n2. Extensive experiments prove that the proposed method outperforms existing algorithms in mitigating gradient reversals,\\n staircase artifacts, and halos and achieves a superior performance in balancing smoothing strength and edge preservation.\", \"weaknesses\": \"1. Though the authors support their claims by extensive qualitative results, but they should also provide the quantitative results to validate their points in the main paper or at least in the supplementary. 
For instance, the authors can include PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), or MSE (Mean Squared Error) on standard synthetic benchmark datasets and LPIPS, MUSIQ, NIQE, MANIQA for real-world tasks. This would allow for a more objective comparison with existing method.\\n2. The method section needs to be refined, as mentioned in Fig1 (that the detail enhancement image is boosted by four detail layers), this statement is not explained in the method section, how the four detail layers are being generated , is it from the wavelet decomposition?\\n3. The paper has lacks ablation study. The authors have given mathematical proofs of choosing the particular operations like RWDR and the edge-aware scale map, they should also try to prove the effectiveness of each proposed component on the overall model.\\n4. The authors should also try to report the results on some real-world applications in Super-Resolution (RealSR, DrealSR,RealLR200), denoising (SIDD, DND) that would further prove the use of the proposed model, if the time permits and should also check on synthetic SR datasets like Manga109, Urban100, and BSD68 (for denoising).\", \"questions\": \"Please check the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes an effective Relative Wavelet Domain Representation (RWDR) for edge-preserving image smoothing. Experimental results show that the proposed method preserve main edges well.\\n\\nThe major concerns of reviewers including adding more experimental results (e.g., subjective evaluations) and limitation analysis. \\n\\nIn the rebuttal, the authors solve the concerns of reviewers. 
Based on the recommendations of reviewers, the paper can be accepted.\", \"additional_comments_on_reviewer_discussion\": \"The major concerns of reviewers include adding more experimental results (e.g., subjective evaluations) and limitation analysis. During the discussion stage, the reviewers were satisfied with the response of the authors.\"}", "{\"comment\": \"We sincerely thank Reviewer kXjP for their careful review and insightful questions. We respectfully address each concern below:\\n***\\n**[Q1] Regarding the quantitative results:**\\n\\nThanks for your thoughtful suggestions. It is worth noting that there are no generally agreed objective metrics to directly measure the quality of smoothed images. However, we can use no-reference image quality metrics to evaluate the performance of smoothing downstream tasks, which indirectly illustrates the performance of smoothing methods. We have utilized PSNR and SSIM to evaluate the smoothing performance in the artifact removal task, as shown in Table 1. The MSE metric has been adopted to illustrate smoothing performance in Figure 8. To provide more quantitative results to validate our points, **we utilize five no-reference image quality objective evaluation metrics: BRISQUE [1], PIQE [2], SSEQ [3], ILNIQUE [4], and CEIQ [5] to compare performance on image detail enhancement and HDR tone mapping tasks**. Smaller values of BRISQUE and PIQE denote higher-quality images, while larger values of SSEQ, ILNIQUE, and CEIQ indicate higher-quality images. 
The quantitative results corresponding to Figure 6 are in the following Table:\\n| Methods | BRISQUE $\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE $\\\\uparrow$ |\\n|:------:|:----:|:-----:|:--------:|:------:|\\n| L0 | 10.6974 | 35.6176 | 24.7566| 117.81 |\\n| ILS | 23.5255 | 37.0118 | 17.7950 | 122.43 |\\n| DRTV | 8.5623 | 33.5745 | 25.1130 | 124.46 |\\n| RoG | 14.9437 | 34.6882 | 24.5738| 124.32 |\\n| muGIF | 20.6328 | 36.6683 | 25.1297| 121.44 |\\n| dRTV | 15.4301 | 25.1630 | 9.9543 | 124.49 |\\n| Ours | **7.6912** | **24.9118** | **26.7940** | **128.84** |\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_14_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ |\\n|:----:|:-----:|:-----:|:------:|:-----:|\\n| RGF | 28.8617 | 30.1807 | 23.7323 | 117.56 |\\n| ResNet | 22.3801| 36.4881| 26.4587 | 120.12 |\\n| muGIF | 16.0524 | 30.2385 | 19.8289 | 123.25 |\\n| VDCNN | 28.9499 | 35.4522| 26.1938| 123.85|\\n| Ours | **9.6503** | **26.8804**| **31.5996** | **126.78**|\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_15_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE$\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ | CEIQ$\\\\uparrow$ |\\n|:---:|:-----:|:----:|:------:|:-----:|:----:|\\n| L0 | 24.8086 | 28.6853| 12.3523 | 129.59| 2.6947|\\n| dRTV | 23.7972 | 26.9161| 16.8464| 129.16| 2.6867|\\n| RGF | 24.3680 | 27.2629 | 15.6265 | 126.34 | 2.7728 |\\n| Ghosh | 23.7344 | 38.8357 | 15.3685| 125.49| 2.0085|\\n| RoG | 28.1463 | 28.8937 | 12.1442| 135.53| 2.8496|\\n| muGIF | 22.8831| 32.4163 | 13.2606 | 132.94| 2.6515 |\\n| RTV | 20.7134 | 30.0749 | 16.1376 | 133.17| 2.6804 |\\n| Ours | **14.8449** | **24.8736**| **17.9714** | **135.69** | **2.8867** |\\n***\\nThese results demonstrate that our model achieves significant superiority over other methods in downstream tasks of 
smoothed images.\\n***\\n**References:**\\n\\n[1] Mittal, Anish, Anush Krishna Moorthy, and Alan Conrad Bovik. \\\"No-reference image quality assessment in the spatial domain.\\\" IEEE Transactions on image processing 21.12 (2012): 4695-4708.\\n\\n[2] Venkatanath, Narasimhan, et al. \\\"Blind image quality evaluation using perception based features.\\\" 2015 twenty first national conference on communications (NCC). IEEE, 2015.\\n\\n[3] Liu, Lixiong, et al. \\\"No-reference image quality assessment based on spatial and spectral entropies.\\\" Signal processing: Image communication 29.8 (2014): 856-863.\\n\\n[4] Zhang, Lin, Lei Zhang, and Alan C. Bovik. \\\"A feature-enriched completely blind image quality evaluator.\\\" IEEE Transactions on Image Processing 24.8 (2015): 2579-2591.\\n\\n[5] Yan, Jia, Jie Li, and Xin Fu. \\\"No-reference quality assessment of contrast-distorted images using contrast enhancement.\\\" arXiv preprint arXiv:1904.08879 (2019).\\n***\\n**[Q2] Regarding the four detail layers in Figure 1:**\\n\\nWe are grateful for your insightful comments. Figure 1 shows the visual effects of the image detail enhancement. It aims to enhance high-frequency regions by incorporating a detail layer into the input image. The core of this technology involves **extracting the high-frequency detail layer by subtracting the smoothed image from the original input**. In other words, **the four details are made by repeating four times the extracted high-frequency detail layer, which is not from the wavelet decomposition**. We presented the detailed process and visual results in the image detail enhancement part of our manuscript. We have added the above-mentioned description in Figure 1.\"}", "{\"summary\": \"In this article, the author reviews image smoothing methods based on local information, global information, and deep learning, and discusses the limitations of current image smoothing techniques when dealing with image textures and image structural edges. 
To address this issue, first, the author proposes a novel edge-aware smoothing model that more effectively distinguishes between image textures and image structures through relative wavelet domain representation (RWDR). Second, the author reintroduces edge-aware scale maps into bilateral filters to improve image edges during the smoothing process. Finally, the author demonstrates the superiority of this method in texture preservation and artifact removal after image smoothing through comprehensive theoretical derivations and experimental results compared to other algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.This paper proposes a relative wavelet domain representation and an edge-aware smoothing model, achieving certain progress in image smoothing technology; 2.The paper utilizes extensive theoretical proofs to establish a mathematical model for the relative wavelet domain. The experimental validation is well-supported by theory; 3.The writing of this paper is relatively fluent and conforms to the standards of English academic writing.\", \"weaknesses\": \"1.In the experimental part of this paper, there is a predominance of qualitative analysis of images. However, due to the significant subjective factors inherent in qualitative experiments, supplementing with more quantitative experiments would enhance the persuasiveness of the results; 2.Image smoothing operations, as one of the fundamental image processing tasks, play a crucial role in various visual tasks. However, the paper seems to lack exploration of specific visual tasks (for example, in super-resolution tasks, the textures and structures preserved after image smoothing are vital for image reconstruction).\", \"questions\": \"1.In the experimental part of this paper, there is a predominance of qualitative analysis of images. 
However, due to the significant subjective factors inherent in qualitative experiments, supplementing with more quantitative experiments would enhance the persuasiveness of the results; 2.Image smoothing operations, as one of the fundamental image processing tasks, play a crucial role in various visual tasks. However, the paper seems to lack exploration of specific visual tasks (for example, in super-resolution tasks, the textures and structures preserved after image smoothing are vital for image reconstruction).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We greatly appreciate Reviewer M1ZN for their positive assessment and constructive feedback. We address each question as follows:\\n***\\n**[Q1] Regarding quantitative results:**\\n\\nThanks for your insightful comments. Image smoothing reduces image content relative to the input image. Therefore, there are no generally agreed objective metrics to directly measure the quality of smoothed images. We can use no-reference image quality metrics to evaluate the performance of smoothing downstream tasks, which indirectly illustrates the performance of smoothing methods. \\n\\n**We utilize five no-reference image quality objective evaluation metrics: BRISQUE [1], PIQE [2], SSEQ [3], ILNIQUE [4], and CEIQ [5] to compare performance on image detail enhancement and HDR tone mapping tasks**. Smaller values of BRISQUE and PIQE denote higher-quality images, while larger values of SSEQ, ILNIQUE, and CEIQ indicate higher-quality images. 
The quantitative results corresponding to Figure 6 are in the following Table:\\n| Methods | BRISQUE $\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE $\\\\uparrow$ |\\n|:------:|:----:|:-----:|:--------:|:------:|\\n| L0 | 10.6974 | 35.6176 | 24.7566| 117.81 |\\n| ILS | 23.5255 | 37.0118 | 17.7950 | 122.43 |\\n| DRTV | 8.5623 | 33.5745 | 25.1130 | 124.46 |\\n| RoG | 14.9437 | 34.6882 | 24.5738| 124.32 |\\n| muGIF | 20.6328 | 36.6683 | 25.1297| 121.44 |\\n| dRTV | 15.4301 | 25.1630 | 9.9543 | 124.49 |\\n| Ours | **7.6912** | **24.9118** | **26.7940** | **128.84** |\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_14_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE $\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ |\\n|:----:|:-----:|:-----:|:------:|:-----:|\\n| RGF | 28.8617 | 30.1807 | 23.7323 | 117.56 |\\n| ResNet | 22.3801| 36.4881| 26.4587 | 120.12 |\\n| muGIF | 16.0524 | 30.2385 | 19.8289 | 123.25 |\\n| VDCNN | 28.9499 | 35.4522| 26.1938| 123.85|\\n| Ours | **9.6503** | **26.8804**| **31.5996** | **126.78**|\\n***\", \"we_present_the_quantitative_numerical_results_corresponding_to_figure_15_in_the_following_table\": \"| Methods | BRISQUE$\\\\downarrow$ | PIQE$\\\\downarrow$ | SSEQ $\\\\uparrow$ | ILNIQUE$\\\\uparrow$ | CEIQ$\\\\uparrow$ |\\n|:---:|:-----:|:----:|:------:|:-----:|:----:|\\n| L0 | 24.8086 | 28.6853| 12.3523 | 129.59| 2.6947|\\n| dRTV | 23.7972 | 26.9161| 16.8464| 129.16| 2.6867|\\n| RGF | 24.3680 | 27.2629 | 15.6265 | 126.34 | 2.7728 |\\n| Ghosh | 23.7344 | 38.8357 | 15.3685| 125.49| 2.0085|\\n| RoG | 28.1463 | 28.8937 | 12.1442| 135.53| 2.8496|\\n| muGIF | 22.8831| 32.4163 | 13.2606 | 132.94| 2.6515 |\\n| RTV | 20.7134 | 30.0749 | 16.1376 | 133.17| 2.6804 |\\n| Ours | **14.8449** | **24.8736**| **17.9714** | **135.69** | **2.8867** |\\n***\\nThese results demonstrate that our model achieves significant superiority over other methods in downstream tasks of 
smoothed images.\\n***\\n**References:**\\n\\n[1] Mittal, Anish, Anush Krishna Moorthy, and Alan Conrad Bovik. \\\"No-reference image quality assessment in the spatial domain.\\\" IEEE Transactions on image processing 21.12 (2012): 4695-4708.\\n\\n[2] Venkatanath, Narasimhan, et al. \\\"Blind image quality evaluation using perception based features.\\\" 2015 twenty first national conference on communications (NCC). IEEE, 2015.\\n\\n[3] Liu, Lixiong, et al. \\\"No-reference image quality assessment based on spatial and spectral entropies.\\\" Signal processing: Image communication 29.8 (2014): 856-863.\\n\\n[4] Zhang, Lin, Lei Zhang, and Alan C. Bovik. \\\"A feature-enriched completely blind image quality evaluator.\\\" IEEE Transactions on Image Processing 24.8 (2015): 2579-2591.\\n\\n[5] Yan, Jia, Jie Li, and Xin Fu. \\\"No-reference quality assessment of contrast-distorted images using contrast enhancement.\\\" arXiv preprint arXiv:1904.08879 (2019).\"}", "{\"comment\": \"**[Q2] Regarding the exploration of specific visual tasks:**\\n\\nWe are grateful for your thoughtful suggestions. Image smoothing is one of the fundamental image processing tasks and plays a crucial role in various visual tasks. In our manuscript, we have introduced three specific visual tasks, including detail enhancement, HDR tone mapping, and comparison artifact removal.\\n\\n1. For image **detail enhancement tasks**, edges and main structures preserved after image smoothing are vital for detail enhancement that can reduce halo artifacts and gradient reversals. We have presented detailed illustrations in line 417 of the Experiment Section.\\n\\n2. For **compression artifact removal tasks**, the goal is to remove compression blocks along edges via smoothing. The edges and main structures preserved are essential for keeping complete information of the input image. The related description can be found in the Section compression artifact removal in line 431 of our manuscript. \\n\\n3. 
For **HDR tone mapping tasks**, the more edges and main structures preserved, the HDR tone mapping image can produce fewer staircase artifacts along edges and halo artifacts. We have presented detailed explanations in line 1003 of our manuscript.\\n\\nIt is worth noting that the proposed model is **not suitable for super-resolution tasks**, but this is a **great potential research topic, prompting that we can extend our model to super-resolution tasks** by designing specific priors in future work.\\n***\\nThank you for helping us improve the clarity and completeness of our paper.\"}", "{\"comment\": \"**[Q3] Regarding the ablation study:**\\n\\nWe greatly appreciate your suggestions. The detailed ablation experiments of the proposed RWDR and the edge-aware scale map have been added in the Experiment section of our manuscript. **To assess the capability of RWDR in distinguishing between textures and structures, we conduct an ablation study on RWDR as shown in Figure 8. ** The model deployed without RWDR has mistreated texture as structure, leading to removing texture uncleanly. In contrast, **to assess the capability of the edge-aware scale map in edge preservation, we conduct an ablation study on the edge-aware scale map as shown in Figure 9.** The model deployed without the edge-aware scale map has smoothed textures cleanly while making main structures and edges lost and blurred.\\n***\\n**[Q4] Regarding the real-world applications:**\\n\\nWe sincerely thank your questions. We have presented **three real-world smoothing applications** in our manuscript, including **image detail enhancement, clip-art compression artifact removal, and HDR tone mapping**. The three real-world applications further prove the significant superiority of the proposed model over other methods. \\n\\nImage smoothing is to remove textures and perturbations that are larger than noise in size. 
**It is worth noting that the proposed model is not suitable for super-resolution and denoising tasks.** Additionally, the rebuttal time is not enough for us to extend our model to super-resolution and denoising tasks. However, these are **promising research topics, suggesting that we can further extend our model to super-resolution and denoising tasks in future work.**\\n***\\nWe sincerely appreciate your contributions in providing insightful advice to help us improve the quality of our manuscript.\"}", "{\"comment\": \"**[Q2] Regarding the user study:**\\n\\nWe greatly appreciate your suggestions. We carefully considered your suggestion about using a user study to statistically confirm the advantages of our model. **A user study needs a sufficient number of samples and the sorting of statistical questionnaires from users, which takes a lot of time. However, the rebuttal time is not enough to do that; we will add the user study to the extended journal version of our work if our manuscript is accepted.**\\n***\\n**[Q3] Regarding the fail scenario:**\\n\\nThanks for your detailed feedback and thoughtful questions. **Since there are no objective metrics to directly measure smoothing performance, we cannot provide a quantitative lower bound for our algorithm's performance.** However, we have presented the limitations of our model in the Conclusion section. The proposed RWDR model also faces the challenge of addressing long-range texture dependencies like other state-of-the-art methods. **In other words, the RWDR model fails in the specific scenario with large irregular multiscale textures.** \\n***\\n**[Q4] Regarding the sensitivity analysis:**\\n\\nThanks for your comments. We have included detailed settings of all parameters in our model, as presented in our appendix **B.1**. \\n\\n1. **For these parameters with fixed settings**, the proposed algorithm is not sensitive to their changes. Therefore, we provide the **recommended values for most scenarios**.\\n\\n2. 
For the remaining parameters of our model, the performance of the proposed model is sensitive to their changes. **Values of these unfixed parameters should be fine-tuned according to the texture complexity.** We also give the **recommended range** of these parameter settings in our manuscript.\\n***\\n**[Q5] Regarding the online demo:**\\n\\nWe greatly appreciate your suggestions about providing an online demo to allow users to test the method easily. We have **uploaded our source code in the Supplementary Material**. At the current stage, users can test the method via the source code. The **web demo is under construction**, and we believe it will be released to the public soon.\\n***\\nWe have clarified these points in the rebuttal version. Thank you again for your contributions in helping us improve our paper.\"}", "{\"comment\": \"Thanks for your suggestion. We have added the analysis of failure cases in the final version **(as shown in Figure 17 and Section B.8 of the supplementary document).**\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
0ULf242ApE
From Context to Concept: Concept Encoding in In-Context Learning
[ "Jinyeop Song", "Seungwook Han", "Jeff Gore", "Pulkit Agrawal" ]
Humans distill complex experiences into fundamental abstractions, enabling rapid learning and adaptation. Similarly, autoregressive transformers exhibit adaptive learning through in-context learning (ICL), which raises the question of how. In this paper, we propose a **concept encoding-decoding mechanism** to explain ICL by studying how transformers form internal abstractions in their representations. On synthetic ICL tasks, we analyze the training dynamics of a small transformer and report the coupled emergence of concept encoding and decoding. As the model learns to encode different latent concepts (e.g., ``Finding the first noun in a sentence.") into distinct, separable representations, it conditionally builds decoding algorithms and improves its ICL performance. We validate the existence of this mechanism across pretrained models of varying sizes (Gemma-2 2B/9B/27B, Llama-3.1 8B/70B). Further, through mechanistic interventions and controlled finetuning, we demonstrate that the quality of concept encoding is causally related to and predictive of ICL performance. Our empirical insights shed light on the success and failure modes of large language models via their representations.
[ "mechanistic interpretability", "in-context learning", "large language models" ]
Reject
https://openreview.net/pdf?id=0ULf242ApE
https://openreview.net/forum?id=0ULf242ApE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kHbCm8kizl", "fLSFsn9EAN", "YUJSKiD9fZ", "RVG54o3LjR", "OKsd9QGRzp", "NknvRLDpHo", "IRKEJ1Zd66" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "meta_review", "official_review", "decision" ], "note_created": [ 1730299553919, 1729511749732, 1731031644549, 1732573435844, 1734579141535, 1730403857108, 1737523742287 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6063/Reviewer_ybCA" ], [ "ICLR.cc/2025/Conference/Submission6063/Reviewer_HHCg" ], [ "ICLR.cc/2025/Conference/Submission6063/Reviewer_3fqh" ], [ "ICLR.cc/2025/Conference/Submission6063/Reviewer_s1LL" ], [ "ICLR.cc/2025/Conference/Submission6063/Area_Chair_BsRU" ], [ "ICLR.cc/2025/Conference/Submission6063/Reviewer_s1LL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies in-context learning in transformer models through the bayesian lens of concept inference.\\nThey find that in a transformer trained on synthetic data, the model learns to separate tasks in its representation space, and this separation is important for task-specific prediction. They also study how concept encoding and decoding behavior emerges in transformers pre-trained on natural language, and find similar results.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and easy to follow, and the figures clearly communicate the experimental results. The authors test a variety of tasks to show the generality of the findings, though they are relatively simple in nature. There are several lines of evidence that support the paper's conclusions, and it appears there is an appropriate amount of caution in presenting results the authors are unsure about.\", \"weaknesses\": \"- The tasks studied here are rather simple. When tasks become more complicated, it's unclear whether the task-vector (and thus) concept inference hypothesis will hold. 
For example, in the synthetic setting, what happens as you increase the number of latent concepts to be learned? Do you find that more latent concepts causes the encoding to become less task-separable?\\n\\nSome of the experiments lack details that might help clarify some confusion/help with future replication. In particular, I have questions about the experiments in 4.3 and 4.4:\\n\\n- The experiment described in section 4.3 seems almost identical to the intervention experiment done by Hendel, et al. [1] to validate the \\\"task vector\\\" hypothesis (at least the positive case), but is missing experimental details. Are there any other differences in this setup besides the tasks, and testing with a \\\"null\\\" task vector? (e.g. do you patch at the final token/multiple tokens, \\n\\n- For the fine-tuning experiments in section 4.4, how can we be sure that the performance increase is due to the \\\"concept encoding\\\" and not something else? Can you describe your fine-tuning experiment setup in a bit more detail? Is each of these subplots a separate fine-tune, or do you fine-tune the layer set on all tasks at once? There are usually also performance gains for fine-tuning the last 10 layers as well. While not stated, it might be worth clarifying whether this paper's hypothesis for this increase is that fine-tuning the final layers strengthens the concept decoding capabilities of models.\\n\\nThe results could also be strengthened by showing these results hold across other model sizes and model families, since the only pretrained LLMs this paper studies is Llama 3 8B (with some training checkpoints results on OLMo-7B). I'd be curious how separability of the representations change across model sizes of the same family (for example - Llama 8B & 70B), or Pythia (Biderman, et al. [2]). 
Though, as it stands, the current results are reasonable evidence for the claims made.\\n\\n___\\n\\nMinor Notes (not worried about this, but just noticed while reading through):\\n- Misspelling in Line 297: \\\"overt\\\" -> \\\"over\\\"\\n- Mis-capitalization in Line 790: In this section, \\\"We\\\" -> we\\n- Misspelling in Line 862: \\\"synthetinc\\\" -> synthetic\\n\\n___\\n[1] Hendel, et al. In-Context Learning Creates Task Vectors. 2024. (https://aclanthology.org/2023.findings-emnlp.624/)\\n\\n[2] Biderman, et al. Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. 2023. (https://proceedings.mlr.press/v202/biderman23a.html)\", \"questions\": \"- In some recent work, Mittal, et al. [3] suggest that inferring latent variables doesn't necessarily improve ICL performance, and that the \\\"task vector\\\" view of ICL may be due to parametric shortcuts that are learned by transformers for certain tasks. I'm curious whether this paper's findings complement, support, or contradict this argument.\\n___\\n[3] Mittal, et al. Does learning the right latent variables necessarily improve in-context learning? 2024. (https://arxiv.org/pdf/2405.19162)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors show that Transformers learn to solve certain tasks in-context by inferring/embedding contexts in separable representations. Conditioning in these task variables allows the Transformer to accurately predict in-context. The authors show this in experiments conducted both on toy-models trained to perform linear regression and on large-scale Transformers like Llama 8B. They also show that task separability correlates with ICL accuracy. 
This connection is also demonstrated using activation patching and fine-tuning on selective parts of the Transformer models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Research question is timely.\", \"The authors perform multiple experiments both with toy models and LLMs.\", \"The experiments and analyses are conceptually simple and neatly replicate findings from Hendel et al. 2023.\", \"Multiple types of interventions were used to validate the efficacy of the task representations, including fine-tuning and activation patching.\"], \"weaknesses\": \"While the experiments and analyses are sound, the results seem more like a replication of the findings from Hendel et al. 2023 or Todd et al. 2023. It is not clear how the concept encoding/decoding framework differs from the Task Vector framework, or why it is necessary to use the concept encoding/decoding framework in the first place. Would it not be fair to characterize the separable representations of the tasks in the ICL experiments as task vectors?\n\nAs such, the findings do not seem very novel or surprising in light of previous papers, like Hendel et al. 2023. While the results presented are interesting, I think the paper would benefit a lot from showing something that hadn't already been shown in previous works. For instance, the analysis showing that fine-tuning early layers of Llama improves the concept separability goes in this direction. At the very least, the authors can help explain why the findings are novel, why their framework is needed, or why performing experiments the way they were done improves our understanding of ICL beyond previous papers. Currently, the paper reads like it gives more credible evidence to the existence of task vectors, which is nice, but it chooses to call it 'Concept encodings' instead, which is confusing and seems unnecessary.\n\nThe writing and explanations can also be improved in various places. 
The framework that is proposed, which makes reference to 'concepts' is confusing. Why not just stick with existing terminology like task vectors? The term 'Concept' is very loaded, and it is not clear that it adds anything to modelling ICL here. \\n\\nAt the same time I think the paper gives some nice supplementary evidence for the existence of task vectors. I would be willing to increase my score if the authors can address the above criticisms.\", \"there_are_also_some_typoes_and_weird_formulations\": [\"Line 122 \\\"Bayeisan\\\"\", \"Line 75, 261, 269 \\\"solve ICL\\\" doesn't seem quite right. The models learn to perform ICL, but ICL is not solved.\", \"Line 161 \\\"over the sequence of sequence of context length 20\\\"\", \"I don't think the quote at the beginning of the paper adds anything and I would recommend removing it.\"], \"questions\": [\"Why does context decodability peak in the middle layers and go down afterwards?\", \"In the first experiment, you perform UMAP on the layer activity to find the clusters. UMAP often exaggerates differences. Does kNN classification work here too?\", \"When you talk about \\\"layer activations\\\", do you mean residual stream representations, or the output of the transformer layers, which are added to the residual stream?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors investigate how concept understanding develops within transformers during training by studying a small model on synthetic tasks designed for in-context learning (ICL). They found that as the model learns to represent different underlying concepts (like identifying parts of a sentence), it also builds ways to decode these concepts, leading to better ICL performance. Examining a larger, pretrained model (Llama-3.1-8B), they show that its ability to encode concepts directly impacts its ICL effectiveness. 
Using techniques like controlled fine-tuning and targeted interventions, they demonstrate that improving concept encoding helps the model perform better on ICL tasks. They also experiment with prompting, finding that it can enhance concept encoding and ICL performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The study addresses an interesting and practical research question: understanding the mechanism behind in-context learning (ICL) in LLMs. This is an intriguing problem from a scientific perspective and has important implications for real-world applications.\", \"The authors designed a simple yet reasonable synthetic task to explore the model's emergent behavior in concept encoding and decoding. Although straightforward, the task is well-suited to investigate the research question.\", \"By training a small GPT model from scratch and prompting the Llama 8B model, the authors effectively examined the hidden representations of LLMs, revealing the coupled emergence of concept encoding and concept decoding.\"], \"weaknesses\": [\"The research question and task design draw on prior related work, which may limit the novelty of this work on these points. (But I still want to emphasize that the authors add an interesting twist by incorporating sparsity constraints into the sparse linear regression task, which is a valuable contribution of this work. )\", \"The hypotheses for each experiment and their specific contributions are not entirely clear. It is difficult to discern what each experiment aims to verify and whether the findings are novel.\", \"For example, in line 146, the authors state, \\\"we demonstrate the emergence of concept encoding and decoding are mutually reinforcing.\\\" However, the experimental results lack sufficient evidence to support this claim, which may make this assertion seem overstated. 
I would encourage the authors to clarify their findings throughout the paper, clearly distinguishing between reproductions of prior work and novel insights, whether in synthetic tasks or large-scale experiments.\"], \"questions\": \"1. What specific hypotheses are being tested in each experiment? Could you clarify these?\n2. How does each experiment contribute to the overall research question? Can you make these connections more explicit?\n3. Are the findings entirely new, or do they replicate previous results? It would help if you clearly identified which results are reproductions and which are novel insights.\n4. In line 146, you mention that concept encoding and decoding are \\\"mutually reinforcing.\\\" Could you provide more evidence or context to support this claim? It may currently come across as overgeneralized.\n5. How does adding sparsity constraints to the sparse linear regression task enhance the study? Could you explain this addition\u2019s significance in more detail?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response and for addressing my concerns.\n\nThe paper is now clearer to me, and I appreciate the additional experiments on different models and sizes, as well as the new layer visualizations and overlapping concepts.\n\nAfter reading other reviews and rebuttals, while acknowledging the concerns raised by others, particularly about similar contributions related to task vectors and fine tuning, I believe the authors have effectively addressed my concerns so I have decided to raise my score.\"}
Expanding this analysis to more in-depth analyses (e.g. how encoding and decoding interact / their relative speed of acquisition / their generalization) and / or to other task families would help make this stronger.\\n\\nI would strongly recommend a workshop for the paper as is, and to resubmit with a few additional analyses -- the paper is borderline and just falling short of a full conference paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors added experiments to address concerns about generalizability (to models and tasks), and clarity of exposition.\\n\\nThe main remaining issues raised were about overall novelty given past work, whether the work is impactful enough -- about which all reviewers did not agree.\"}", "{\"summary\": \"This paper studies the training dynamics of in-context learning (ICL) in transformers by analyzing their representations. It demonstrates the emergence of concept encoding, where the model first encodes the latent concepts in the representation space, and decoding, where the model applies a selective algorithm. They initially show the existence of concept encoding-decoding on experiments with synthetic tasks using a smaller autoregressive model on a mixture of sparse linear regression tasks. The same concept encoding-decoding mechanism exists in the pretrained Llama-3 model, where the authors show that concept encoding holds for POS tagging and bitwise arithmetic tasks as well. 
The study further shows a causal link between concept decoding capabilities and ICL performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**S1:** The paper studies an interesting area of ICL, where the authors propose a new perspective on the training dynamics of ICL by showing the existence of a two-step encoder-decoder mechanism within the transformers.\", \"**S2:** The paper is well-designed with sound experiments and with the study starting from synthetic and simpler tasks on a smaller model, and extending to the similar trends on a larger, real-world model and NLP tasks. Authors further conduct additional analysis based on model patching and fine tuning.\"], \"weaknesses\": [\"**W1:** While the paper is well written overall, the introduction does not emphasize the main contributions and it is challenging to identify the importance of key insights from the beginning. Next, the paper closes with a brief discussion but lacks a fully rounded conclusion.\", \"**W2:** Figure 1 is hard to interpret since the same markers are used for different LSE and Lasso regression. Moreover, there is also unnecessary whitespace around Figure 1. Finally, I believe there is a typo in row 377 and the text should refer to Figure 5.\", \"**W3:** Although the study evaluates both synthetic and real-world tasks, the real-world experiments are limited to a single model (Llama-3.1-8B) and two relatively simple tasks, which raises concerns about whether the concept encoding-decoding mechanism will generalize to more complex or realistic tasks and larger models. Additional experiments on diverse or harder tasks could strengthen the evidence for generalizability.\", \"**W4:** The paper shows unsurprising and expected results on Figure 3 and Figure 8. 
The finding that an increased number of demonstrations leads to better encoding and decoding seems expected, as more examples provide more \u201clearning\u201d signal, which is observed for few-shot learning problems.\", \"Next, observing that some tasks fail to achieve high accuracy and Concept Decodability (CD) due to representation limitations aligns with existing research about the generalization capabilities of ICL and ICL failing in cases when the new or similar-enough task was not observed so frequently during pretraining, which is commented in the Discussion section.\", \"Finally, the observation that fine tuning the model improves CD and ICL accuracy is not surprising as the representation subspaces are aligned and the ICL task is now the same as the IWL task due to fine tuning.\"], \"questions\": [\"This work explores an interesting and relevant topic while providing constructive insights into ICL training. However, the authors should improve the paper structurally by having clearer contribution highlights and a more rounded conclusion paragraph. Moreover, the results concerning number of demonstrations, fine tuning and the connection between the CD and ICL accuracy come across as expected and are hindering the paper's contributions.\", \"Given everything considered, I would be open to raising my score if the authors address the questions stated under the \u201cWeaknesses\u201d section and the following ones:\", \"In the synthetic experiments in section 3, layer 5 is analyzed closely. Why was this specific layer chosen, and how do observations vary across other layers?\", \"Can you please provide more details on the training setup for the synthetic experiments in section 3. Do you train 4 different models for different betas, or is it the same model? 
If it is the same model, how do you perform UMAP?\", \"Have you considered testing the encoding - decoding capabilities across different models and tasks to show the generality of the encoding mechanism in large language models? Can the same be observed in multi-modal models?\", \"Have you tested how the model handles more complex tasks, multi step tasks or tasks where concept overlap exists? Could you perform additional experiments to show the scenario with overlapping concepts?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
0UCoWxPhQ4
SAVA: Scalable Learning-Agnostic Data Valuation
[ "Samuel Kessler", "Tam Le", "Vu Nguyen" ]
Selecting data for training machine learning models is crucial since large, web-scraped, real datasets contain noisy artifacts that affect the quality and relevance of individual data points. These noisy artifacts will impact model performance. We formulate this problem as a data valuation task, assigning a value to data points in the training set according to how similar or dissimilar they are to a clean and curated validation set. Recently, *LAVA* (Just et al., 2023) demonstrated the use of optimal transport (OT) between a large noisy training dataset and a clean validation set, to value training data efficiently, without the dependency on model performance. However, the *LAVA* algorithm requires the entire dataset as an input, which limits its application to larger datasets. Inspired by the scalability of stochastic (gradient) approaches which carry out computations on *batches* of data points instead of the entire dataset, we analogously propose *SAVA*, a scalable variant of *LAVA* with its computation on batches of data points. Intuitively, *SAVA* follows the same scheme as *LAVA* which leverages the hierarchically defined OT for data valuation. However, while *LAVA* processes the whole dataset, *SAVA* divides the dataset into batches of data points, and carries out the OT problem computation on those batches. Moreover, our theoretical derivations on the trade-off of using entropic regularization for OT problems include refinements of prior work. We perform extensive experiments to demonstrate that *SAVA* can scale to large datasets with millions of data points and does not trade off data valuation performance. Our Github repository is available at \url{https://github.com/skezle/sava}.
[ "Data Valuation", "Optimal Transport", "Data Selection", "Active Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=0UCoWxPhQ4
https://openreview.net/forum?id=0UCoWxPhQ4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xjWTcEzY5E", "wLdMf5SaLR", "wFnjNkSSBt", "o410PO5iwc", "nNmRD62RDH", "lzSEXBFx3u", "lZ3FTyhwCu", "im3RHMTUXZ", "hOrTHtm3G1", "cpGPMHNbvU", "XmpVwqShgW", "SGlrNNOA3j", "S2ePN70JFQ", "RakwmCKfvQ", "Olxm0zlDHn", "I1uxVBd3mZ", "H5NxDEvvWm", "GvcpS2MruG", "GAHs4yqEh8", "FkifkPy8gs", "Ec3CmQjGBC", "8NxyTQRdY9", "0QDBb4dpXb" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732755671553, 1732526190528, 1732418311628, 1732418387299, 1732177289877, 1732716158756, 1730293919305, 1732179725852, 1732185742397, 1730646041674, 1732418239749, 1737523798453, 1730689995240, 1732984777542, 1732179517220, 1732526301981, 1732526246307, 1732941814513, 1732185406103, 1732177992138, 1732177788046, 1734696186198, 1732961878439 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Reviewer_ckNn" ], [ "ICLR.cc/2025/Conference/Submission6869/Reviewer_RqVV" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Reviewer_ckNn" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6869/Reviewer_94Rm" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Reviewer_94Rm" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ], [ "ICLR.cc/2025/Conference/Submission6869/Area_Chair_EtJZ" ], [ "ICLR.cc/2025/Conference/Submission6869/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thank you for your response, and we deeply appreciate your thoughtful endorsement.\", \"our_answer_for_your_questions_is_as_follow\": [\"We agree with the Reviewer that our approach based on hierarchical OT is an extension of LAVA, but respectfully, it is not trivial. Indeed, note that **a naive HOT approach is not entirely efficient since its runtime to compute HOT subgradient is not necessarily faster than the LAVA approach**. Our proposed label-to-label cost caching significantly reduces the runtime of the naive HOT approach with no detriment to performance (see line 401-406 and Figure 9). Briefly, **the proposed label-to-label cost caching is simple, but critical to make our approach based on HOT effective and efficient for data valuation**, especially for large-scale settings (where it is prohibited for the seminal LAVA).\", \"Theoretically, following OT theory, we derive the HOT subgradient (to our knowledge, it has not been done in previous HOT approaches in the OT literature). Importantly, we **correct and refine the trade-off** by using entropic regularization to approximate the computation of OT subgradient in LAVA to characterize the corresponding exact trade-off for our approach based on HOT.\", \"If our responses have addressed your concerns, we kindly ask that you consider raising your score to reflect your updated evaluation of our paper more accurately. 
Again, thank you very much for your time and thoughtful comments!\"]}", "{\"title\": \"Kind Reminder: Response of Author Rebuttal for Paper 6869\", \"comment\": \"Dear Reviewer RqVV,\n\nWe sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes.\n\nThis is a gentle reminder that the discussion phase will end in less than 2.5 days from this comment, i.e., 11:59 pm AoE on November 26. We are happy to answer any further questions you may have before then. Please note that you cannot respond to us after that time, and we cannot reply to you after 11:59 pm AoE on November 27.\n\nIf our responses have addressed your concerns, we kindly ask that you consider raising your score to reflect your updated evaluation of our paper more accurately. Thank you again for your time and thoughtful comments!\n\nSincerely,\n\nThe Authors\"}", "{\"title\": \"Any Questions from the Reviewer ckNn on Our Rebuttal?\", \"comment\": \"We would like to thank the Reviewer again for your thoughtful comments and valuable feedback.\n\nWe would appreciate it if you could let us know whether our responses have addressed your concerns, and whether you still have any other questions about our rebuttal. \n\nWe would be happy to do any follow-up discussion or address any of your additional comments.\"}", "{\"title\": \"Any Questions from the Reviewer RqVV on our Rebuttal?\", \"comment\": \"We would like to thank the Reviewer again for your thoughtful comments and valuable feedback.\n\nWe would appreciate it if you could let us know whether our responses have addressed your concerns, and whether you still have any other questions about our rebuttal. 
\n\nWe would be happy to do any follow-up discussion or address any of your additional comments.\"}", "{\"title\": \"Global response to reviews\", \"comment\": \"We express our gratitude to the Chairs and the Reviewers for spending time reviewing our paper and providing constructive feedback. We are grateful to the Reviewers for recognizing firstly the importance of the problem setting and the novelty of SAVA which scales the OT data valuation problem by using batches (94Rm, RqVV), secondly the clarity of the presentation (ckNn, RqVV), thirdly the comprehensive derivations (94Rm, RqVV), and finally the empirical analysis (94Rm, ckNn). We are also grateful for constructive critical comments, which helped us to improve the paper!\n\n# Paper updates\n\nMany thanks for your suggestions. We have made the following revisions to our paper and updated the paper on OpenReview highlighting our changes in orange.\n\n## Clarity of the derivations (94Rm, RqVV)\n\nWe have revised the captions for Fig 1 and Alg 1 to make them clearer and self-contained. We have also moved the notation table from the appendix into the main body of the paper to make it easier for the reader to familiarize themselves with the notation.\n\n## Time complexity (ckNn, RqVV)\n\nWe have added a new Section D in the appendix where we compare the time complexity of SAVA and LAVA.\n\n## Runtimes (94Rm, RqVV)\n\nGiven the suggestion from RqVV to include batch-wise LAVA in the runtime analysis in Fig 9, we have updated the plot to benchmark runtimes on the same GPU, and we have updated the curves for SAVA, LAVA, KNN Shapley and SAVA with label-to-label caching (used throughout the paper). We have added the new runtime curves for batch-wise LAVA with label-to-label caching (used throughout the paper). 
Although there is a quadratic number of small batch-level OT problems which need to be solved, runtimes are comparable to SAVA when using label-to-label caching.\"}", "{\"title\": \"Response\", \"comment\": \"The reviewer appreciates the authors' efforts for this response. While most of my concerns/comments are addressed, I am still concerned about the novelty. Why is calculating the sub-gradient of OT a challenge? Wouldn't this be a direct extension from LAVA?\n\nGiven that said, however, I am willing to lift my score to 6 to acknowledge the improvement from the initially submitted version.\"}", "{\"summary\": \"This paper proposes a new learning-agnostic data valuation approach that assigns a value to each data point in the training set based on its similarity to the validation set. They introduce SAVA, a scalable variant of the LAVA algorithm, which uses optimal transport (OT) sensitivity to value training data efficiently, without directly depending on model performance. Unlike LAVA, which requires the entire dataset as input, SAVA operates on batches of data points, giving it a smaller memory consumption. Thus, SAVA can make the valuation task of larger datasets possible. 
The authors conduct extensive experiments showing that SAVA can effectively scale to large datasets and maintain data valuation performance across various downstream tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"[S1] An interesting approach leveraging the idea of batches to solve the memory bottleneck encountered in OT solver as optimizer in model training.\\n\\n[S2] Detailed theoretical proofs and descriptions of previous work are given.\\n\\n[S3] The article is well-organized and easy to read.\", \"weaknesses\": \"[W1] My biggest concern is the proof of the upper bound does not adequately explain why this proxy can work.\\u00a0\\u00a0Detailed analysis on the upper bound of the proxy practicability should be taken.\\n\\n[W2] My second concern is that the paper lacks of time complexity analysis. And SAVA in Figure 2 seems to be no better than Batch-wise LAVA. In the appendix Figure 9, why not compare Batch-wise LAVA in running time metric? \\n\\n[W3] Typos: Line 417, \\\"Batch-wise LAVA KNN Shapley and\\\" -> \\\"Batch-wise LAVA, KNN Shapley, and\\\"\\n\\nPlacing Table 1 in Section 2 would help to improve understanding.\", \"questions\": \"pls see W2\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to review (1/n)\", \"comment\": \"**Q: [W1] My biggest concern is the proof of the upper bound does not adequately explain why this proxy can work. Detailed analysis on the upper bound of the proxy practicability should be taken.**\", \"a\": \"Indeed, we find it remarkable that it performs so well for the corrupted CIFAR10 experiments, it is an interesting result for the OT community, potentially these CIFAR10 experiments are too easy. 
That said, batch-wise LAVA is more sensitive to the batch size hyper-parameter, Fig 11, especially for small batch sizes due to small numbers of points in the validation batch to compare to, and with large batch sizes since the final batch might be small too. Crucially, batch-wise LAVA underperforms SAVA for the large scale Clothing1M experiment.\"}", "{\"title\": \"Response to review (2/n) n=2\", \"comment\": \"**Q: What happens if the validation dataset gets corrupted?**\", \"a\": \"We observe an increase in performance when comparing test performance after training on a pruned dataset (10%-40% of the data) versus training on the entire dataset (0% pruning). The optimal pruning percentage seems to be around 20%-30%. For more aggressive pruning, 40%, we start to see performance decrease (but still better than 0% pruning, no pruning) as data that is not noisy is starting to be removed from the dataset. At some pruning percentage we can expect performance starting to deteriorate so it is not surprising to see performance dropping after a certain amount of pruning. Since this is a large non-curated dataset, it is difficult to gauge the exact percentage of truly noisy data points to gauge a \\u201ccorrect\\u201d pruning percentage.\"}", "{\"summary\": \"This paper develops a variant of LAVA, called SAVA, for scalable data valuation. The idea is perform data valuation on batches of data instead of on the entire dataset. Extensive numerical results are presented to demonstrate SAVA's efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The experimental results are convincing. The authors compared to SOTA methods for data valuation across various data corruption scenarios. The results demonstrate that SAVA is scalable to large datasets. 
Also, the results included a dataset of size larger than 1 million samples, in which the proposed method outperforms benchmarks.\", \"The writing is good and easy to follow.\"], \"weaknesses\": [\"The reviewer's biggest concern is related to novelty. Currently, SAVA seems a very natural extension of LAVA for data valuation on batches. The submission seems to be on the incremental side, unless the authors can clearly state the technical challenge when calculating on batches.\", \"The choice of batch size is a key hyper-parameter in SAVA (and key difference to LAVA). The authors are suggested to include formal theoretical analysis to quantify the tradeoff in choosing batch size between memory and calculation approximation. Also, Appendix G should appear in the main text.\", \"The authors are suggested to include a table comparing the complexities of LAVA and SAVA.\", \"What happens if the validation dataset gets corrupted?\", \"In Fig. 3, why is the performance of SAVA dropping at .4 proportion?\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the problem of extending Optimal Transport (OT) distance-based data valuation methods for larger scale problems. 
The paper points out that for current methods, the quadratic overhead for expensive GPU memory constrains the scale of problems they can be applied to. Correspondingly, this paper proposes to compute the OT problem in a batch-wise fashion where the batch-wise results are aggregated via a hierarchical OT framework to produce data point valuations. This approach allows converting intractable large-scale OT problems into a series of smaller problems that can be solved efficiently. Empirical results on a variety of tasks show the proposed approach achieves competitive performance compared to original methods while being applicable to larger-scale problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem is well-contextualized and the motivation is clear. Structure of the paper is well balanced and the elaborations are coherent. It is straightforward for readers to understand the scope and target of the paper and the proposed technical approaches.\n\nThe proposed method is plausible, leveraging the hierarchical OT framework to aggregate results from batch-wise OT computations and achieving favorable approximation results.\n\nDerivations are comprehensive and are paired with substantial elaborations. Empirical evaluations are diverse and abundant and the results are valid.\", \"weaknesses\": \"I am still somewhat concerned about the computation overhead for SAVA. Even though it avoids directly solving large-scale OT problems and circumvents OOM issues, it now requires solving a quadratic number of OT problems between every pair of batches and aggregating their results. This could also take a significant amount of time if the number of batches is high.\n\nAre there results on actual time comparisons for the methods in empirical studies?\n\nThe structure of the paper still has room to improve. The current layout is dense where there are many equations and lemmas interleaved with elaborations. 
There's an overhead for the readers to familiarize themselves with the notations before being able to catch up with the ideas. It could be made more straightforward.\\n\\nFor example, the crucial Figure 1 and Algorithm 1 are not self-contained. Many of the involved notations are not straightforward and are also not introduced in the captions. It still requires readers to first read through the texts and derivations to understand what is being done. I strongly suggest the authors make an effort to improve these visualizations, which could substantially improve the paper's accessibility and impact.\", \"questions\": \"Other than hierarchical OT and the proposed implementation, there are some other ideas for mitigating OT efficiency issues.\\n\\nSome standard approaches include low-rank approximation to the transportation matrix C, which is often possible for practical cases. This allows representing the large matrix C with multiplication of smaller matrices and avoids directly materializing the large matrix C and OOM issues. \\n\\nAnother somewhat connected idea is to directly quantize the train and validation distributions (e.g., approximate the distributions via downsampling) to simplify the OT problem.\\n\\nHierarchical OT can also be conducted with clustering methods. For example, at the lower level, group all the samples into a number of clusters, and at the higher level, solve the OT problem between the centroids of clusters. 
\\n\\nIt will be very interesting to see how to connect the proposed framework to these ideas and whether they may help further improving the computation complexity or accuracy.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Final Reminder: Response of Author Rebuttal for Paper 6869\", \"comment\": \"Dear Reviewer RqVV ,\\n\\nWe appreciate the time you have taken to provide feedback on our work.\\n\\nThis is a final reminder that the discussion phase will end soon, on Monday 2 December. We would be more than happy to answer any further questions you may have before then.\\n\\nIf our responses have addressed your concerns, we kindly ask that you consider revising your score to reflect these clarifications. Thank you again for your time and thoughtful comments!\\n\\nSincerely,\\n\\nThe Authors of Paper 6869\"}", "{\"title\": \"Response to review (1/n)\", \"comment\": \"**Q: The reviewer's biggest concern is related to novelty. Currently, SAVA seems a very natural extension of LAVA for data valuation on batches. 
The submission seems to be on the incremental side, unless the authors can clearly state the technical challenge when calculating on batches.**\", \"a\": [\"Thank you for the suggestion, we have included a table in a revised version of our manuscript with the following time complexities in Section D of the appendix.\", \"For simplicity, we assume $N \\\\ge N'$, then Sinkhorn time complexity is $O(N \\\\times N\\u2019 \\\\times \\\\log(N))$ [Dvurechensky 2018].\", \"Using the notation from the paper:\", \"$N$ is the size of the training set.\", \"$N\\u2019$ the size of the validation set.\", \"$V$ is the number of classes.\", \"$K_t$ the number of training batches.\", \"$K_v$ the number of validation batches.\", \"$N_{y_i}$ the number of instances per class $y_i$ such that $N = \\\\sum_i N_{y_i}$.\", \"$N_{i_{y_i}}$ the number of instances per class $y_i$ per batch.\", \"$\\\\mathbf{V}, \\\\mathbf{N}$, $\\\\mathbf{N_\\\\mathbf{y_j}}$, $\\\\mathbf{N_{i_{y_i}}}$ is used to denote the respective validation dataset quantities. In the paper we use the prime notation $V\\u2019, N', N'_{y'_j}$, but in OpenReview we use the bold notation due to Latex rendering issues.\", \"For LAVA, we compute class-wise Wasserstein distance and solve $V \\\\times V'$ OT problems with the complexity $\\\\tilde{\\\\mathcal{O}}(N_{y_i} \\\\times \\\\mathbf{N_\\\\mathbf{y_j}} \\\\times \\\\log N_{y_i}$). Totally, the complexity is $\\\\sum_{i=1}^{V} \\\\sum_{j=1}^{\\\\mathbf{V}} \\\\tilde{\\\\mathcal{O}}(N_{y_i} \\\\times\\\\mathbf{N_\\\\mathbf{y_j}} \\\\times \\\\log N_{y_i})$. 
Additionally, we also need to solve OT between two datasets with complexity $\\tilde{\\mathcal{O}}(N \\times \\mathbf{N} \\times \\log N)$.\", \"For SAVA, with the label-to-label caching implementation, for the classwise Wasserstein distance, the total complexity is $\\sum_{i=1}^V \\sum_{j=1}^{\\mathbf{V}} \\tilde{\\mathcal{O}}(N_{i_{y_i}} \\times \\mathbf{N_{i_{y_i}}} \\times \\log N_{i_{y_i}})$, where for each class $y_i, \\mathbf{y_j}$, we sample $N_{i_{y_i}}, \\mathbf{N_{i_{y_i}}}$ instances respectively for these classes. Let $N_{i}, \\mathbf{N_{j}}$ be the number of samples in batches and $K_t, K_v$ be the number of batches respectively; then the total complexity to solve all batch-level OT problems is $\\sum_{i=1}^{K_v} \\sum_{j=1}^{K_t} \\tilde{\\mathcal{O}}(N_i \\times \\mathbf{N_{j}} \\times \\log N_i)$. Additionally, we need to solve one more OT problem between batches with complexity $\\tilde{\\mathcal{O}}(K_v \\times K_t \\times \\log K_v)$.\", \"[Dvurechensky 2018] Dvurechensky, Pavel, Alexander Gasnikov, and Alexey Kroshnin. \\\"Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn\\u2019s algorithm.\\\" International conference on machine learning. PMLR, 2018.\"]}", "{\"title\": \"Kind Reminder: Response of Author Rebuttal for Paper 6869\", \"comment\": \"Dear Reviewer 94Rm,\\n\\nWe sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes.\\n\\nThis is a gentle reminder that the discussion phase will end in less than 2.5 days from this comment, i.e., 11:59 pm AoE on November 26. We are happy to answer any further questions you may have before then. 
Please note that you cannot respond to us after that time, and we cannot reply to you after 11:59 pm AoE on November 27.\\n\\nIf our responses have addressed your concerns, we kindly ask that you consider raising your score to reflect your updated evaluation of our paper more accurately. Thank you again for your time and thoughtful comments!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Kind Reminder: Response of Author Rebuttal for Paper 6869\", \"comment\": \"Dear Reviewer ckNn,\\n\\nWe sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes.\\n\\nThis is a gentle reminder that the discussion phase will end in less than 2.5 days from this comment, i.e., 11:59 pm AoE on November 26. We are happy to answer any further questions you may have before then. Please note that you cannot respond to us after that time, and we cannot reply to you after 11:59 pm AoE on November 27.\\n\\nIf our responses have addressed your concerns, we kindly ask that you consider raising your score to reflect your updated evaluation of our paper more accurately. Thank you again for your time and thoughtful comments!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"I thank the authors for the responses. I have carefully read through the rebuttals, revisions in the manuscript, and other reviewers & responses. I remain positive about this work and recommend it for publishing. Specifically, the discussions in this rebuttal are very insightful and informative. I recommend the authors add them to the paper or its appendix.\\n\\nBased on the scores and other reviews, I believe this paper is headed to being accepted. **I wish to give it a score of 7, Clear Accept, but this option isn't available.** The reason I'm hesitant about giving it a score of 8 is I feel its presentation can still be improved. 
The current paper is technically solid with a valid contribution, but I'm concerned whether it could effectively reach out to a broader audience beyond the data valuation community. I hope the authors can continue to improve it for maximum impact.\\n\\n**I raised my score to 8. Should the AC hesitate about whether to accept the paper for its borderline scores, I'm willing to advocate for its acceptance.**\"}", "{\"title\": \"Response to review (2/n) n =2\", \"comment\": \"**Q: [W2] In the appendix Figure 9, why not compare Batch-wise LAVA in running time metric?**\\n\\nWe have updated Figure 9 to include Batch-wise LAVA in the run-time analysis. We have updated all methods to ensure we benchmark on the same hardware. Our implementation of Batch-wise LAVA also uses label-to-label caching and so is as performant as SAVA with label-to-label caching although SAVA requires solving a final OT problem in line 6 of Alg 1. Using label-to-label caching is essential to reduce runtimes by an order of magnitude for SAVA and matching LAVA runtimes. \\n\\n**Q: [W3] Typos: Line 417, \\\"Batch-wise LAVA KNN Shapley and\\\" -> \\\"Batch-wise LAVA, KNN Shapley, and\\\"**\", \"a\": \"We have placed Table 1 in the main body of the paper to improve readability.\"}", "{\"title\": \"Response to review (2/n) n=2\", \"comment\": \"**Q: Some standard approaches include low-rank approximation to the transportation matrix C, which is often possible for practical cases. This allows representing the large matrix C with multiplication of smaller matrices and avoids directly materilizing the large matrix C and OOM issues.**\", \"a\": \"In case one uses clustering to quantize input labelled data points into clusters, then consider datasets as measures over clusters (via the cluster centroids), and only compute the OT between those measures over clusters for data valuation. 
Consequently, one can only value the clusters (or more precisely, the cluster centroids), but not the input labelled data points anymore.\\n\\nAdditionally, we agree that clustering is an alternative approach to partition input labelled data points into batches instead of using the random partitions for HOT, but it comes with the extra cost of clustering. Empirically, we observe that HOT works well with random partitions into batches. It would be interesting to reconcile clustering into the proposed LAVA and SAVA frameworks and see whether it helps to further improve the performances, with the extra cost from clustering to partition labelled data points into batches. Therefore, we leave this trade-off on using clustering for HOT for future investigation. \\n\\nWe further note that for some clustering methods, e.g., K-means clustering method, we would be required to compute the means for **labelled** data points, which is nontrivial and may be very costly. More precisely, we need to compute the mean for labelled data points $(x_i, y_i)$ w.r.t. the label-feature distance in Eq. (1).\"}", "{\"title\": \"Response to review (1/n)\", \"comment\": \"**Q: I am still somewhat concerned about the computation overhead for SAVA. Even it avoids directly solving large-scale OT problems and circumvents OOM issues, it now requires solving a quadratic number of OT problems between every pair of batches and aggregating their results. This could also take a significant amount of time if the number of batches are high.**\\n\\n**Are there results on actual time comparisons for the methods in empirical studies?**\", \"a\": \"We discuss this in lines 207-212. For instance, sliced-Wasserstein (Rabin et al., 2011) or Sobolev transport (Le et al., 2022) are good approaches for approximating the OT distance, but for data valuation, we require the (sub)gradient of the OT w.r.t. to a mass of input support. 
We also need to solve the dual formulation of the OT for the optimal dual variables, which may not be obtainable for sliced-Wasserstein (while Sobolev transport is limited to input data points supported on a given graph structure). That is why we opt for hierarchical OT.\"}", "{\"metareview\": \"This paper introduces SAVA, a new and more scalable variant of the data valuation method LAVA. LAVA was restricted by the amount of computation required when computing the optimal transport distance. SAVA addresses this problem by computing these metrics in a batch manner. Experimental results successfully showed the promise of the approach.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers are unanimous about the paper's contribution.\"}", "{\"title\": \"Thank you for your endorsement!\", \"comment\": \"Many thanks for your response, and we deeply appreciate your thoughtful endorsement.\\n\\nWe will revise the paper following the feedback and suggestions in the rebuttal discussion.\\n\\nWith best regards,\"}" ] }
0UCkWfcfb9
OPTune: Efficient Online Preference Tuning
[ "Lichang Chen", "Jiuhai Chen", "Chenxi Liu", "John Kirchenbauer", "Davit Soselia", "Chen Zhu", "Tom Goldstein", "Tianyi Zhou", "Heng Huang" ]
Reinforcement learning with human feedback~(RLHF) is critical for aligning Large Language Models (LLMs) with human preference. Compared to the widely studied offline version of RLHF, \emph{e.g.} direct preference optimization (DPO), recent works have shown that the online variants achieve even better alignment. However, online alignment requires on-the-fly generation of new training data, which is costly, hard to parallelize, and suffers from varying quality and utility. In this paper, we propose a more efficient data exploration strategy for online preference tuning, OPTune, which does not rely on human-curated or pre-collected teacher responses but dynamically samples informative responses for on-policy preference alignment. During data generation, OPTune only selects prompts whose (re)generated responses can potentially provide more informative and higher-quality training signals than the existing responses. In the training objective, OPTune reweights each generated response (pair) by its utility in improving the alignment so that learning can be focused on the most helpful samples. Throughout our evaluations, OPTune'd LLMs maintain the instruction-following benefits provided by standard preference tuning whilst enjoying 1.27-1.56x faster training speed due to the efficient data exploration strategy.
[ "Efficient RLHF; Online DPO;" ]
https://openreview.net/pdf?id=0UCkWfcfb9
https://openreview.net/forum?id=0UCkWfcfb9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tWCkMhxGQQ", "ZTUPRCGmf0", "UjQmaDha27", "Qtt9fQzYb2", "N1p4rMhMJm", "FDKZapZlCj" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730676756129, 1730718241771, 1730635321166, 1732578508795, 1730670401640, 1730404068575 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6048/Reviewer_t2sx" ], [ "ICLR.cc/2025/Conference/Submission6048/Reviewer_NRtG" ], [ "ICLR.cc/2025/Conference/Submission6048/Reviewer_8QcR" ], [ "ICLR.cc/2025/Conference/Submission6048/Authors" ], [ "ICLR.cc/2025/Conference/Submission6048/Reviewer_Vdwc" ], [ "ICLR.cc/2025/Conference/Submission6048/Reviewer_mgRJ" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes OPTune, an approach to enhance both the generation and training efficiency of online preference tuning for LLM alignment.\\nTo improve the generation efficiency, OPTune selects prompts whose regenerated responses are likely to provide more informative and higher-quality training signals.\\nIn addition, weighted DPO is proposed to improve the training efficiency by modelling the reward gap of response pairs.\\nEmpirical results show that LLMs tuned with OPTune maintain instruction-following benefits and achieve faster training speeds compared to standard preference tuning methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and well-organized.\\n\\nConsidering online DPO takes more time than the original offline method, improving its efficiency is of great significance.\", \"weaknesses\": \"Given that iterative DPO often utilizes different prompts in different iterations [1] to avoid overfitting or overoptimization [2], it is not clear how the proposed method can be used in such scenarios.\\n\\nThe performance of the models corresponding to different selection ratios in Table 2 is not very different and is generally low, which 
cannot explain the effectiveness of the method.\", \"references\": \"[1] Meng Y, Xia M, Chen D. Simpo: Simple preference optimization with a reference-free reward[C]. NeurIPS, 2024.\\n\\n[2] Rafailov R, Chittepu Y, Park R, et al. Scaling laws for reward model overoptimization in direct alignment algorithms[J]. arXiv preprint arXiv:2406.02900, 2024.\", \"questions\": \"How do you rank the response pairs? Do you use the average rewards of the preferred and less preferred responses? Is there a better prompt selection method suitable for wDPO?\", \"regarding_the_experimental_configuration_in_table_1\": \"How many responses were generated for each prompt?\\n\\nDid the authors observe overoptimization when reusing the same prompts in each iteration?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper targets LLM alignment with human preferences in an online manner. OPTune involves two main strategies to reduce computational costs while maintaining alignment quality, including selective generation and weighted DPO loss. The authors conduct experiments using OPTune with LLMs and report a 1.27\\u20131.56x speedup in training while maintaining or even improving model alignment.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. OPTune achieves notable computational savings in data generation and training, reducing costs for online RLHF while preserving alignment quality.\\n2. By focusing on low-reward prompts, OPTune avoids unnecessary regeneration, which is a pragmatic approach to improve efficiency.\\n3. Using weighted DPO loss changes binary signals to dense signals, improving alignment by prioritizing high-utility samples.\", \"weaknesses\": \"1. The choice of the ratio of re-generated prompts $\\\\rho$ can be a key factor of OPTune. 
Though the authors conduct experiments with different $\\\\rho$s, they do not provide direct insights on how to choose $\\\\rho$ to balance between efficiency and performance.\\n2. Online DPO (without the weighted loss) should be the most related baseline for this paper. Though some experiments are conducted, the authors do not sufficiently evaluate OPTune's superiority over online DPO.\\n3. In Table 3, the performance on TruthfulQA is incorrectly bolded. The offline DPO model has higher performance.\\n4. The choice of $\\\\beta_2$ in the weighted loss is significant, yet the authors do not reveal any insight or related experiments on it.\", \"questions\": \"1. A choice of small $\\\\rho$ can speed up the training process. However, we may improve the efficiency by directly reducing the training epochs while enlarging the learning rate, which may bring more significant speedups. Will an online training method with dynamic learning rate adjustment have better efficiency?\\n2. Samples whose reward gap between positive and negative responses is high may dominate the learning loss. Does the training curve show more significant instability than DPO?\\n3. Will the un-re-generated responses be over-optimized?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the high-cost issue of online alignment. It proposes a method aimed at improving the efficiency of online alignment, specifically consisting of two parts. First, only the lowest-rewarded responses generated under the latest LLM policy are regenerated and updated. Second, the loss function is modified to assign higher weights to response pairs that contribute more during training.\\nSimilar methods to the two improvements proposed in this paper have already emerged within the community. 
Further, the experimental section lacks a proper evaluation of the improvements due to the choice of an outdated and subpar baseline Zephyr 7B Beta (Alpaca eval rank 131).\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The writing is relatively clear.\", \"weaknesses\": \"1. Lack of Innovation\\nOver the past year, the alignment community has proposed numerous methods similar to those used in this paper. As early as the Llama 2 Technical Report, the approach of directly incorporating the score difference between two responses into the loss function was introduced. Although the Llama 2 Technical Report is cited in the Related Work section, there is no comparative discussion with Llama 2 or other similar works in Section 3.2.\\n2. Incomplete Experiments and Lack of Analysis\\nThis paper introduces a scaling factor, beta 2, to amplify the score difference and combines it with the original DPO loss function via multiplication rather than addition. However, the motivation behind this approach is not explained. More importantly, there is no ablation study to compare the impact of different values for the scaling factor beta 2 or other ways of incorporating score differences.\\n3. Unconvincing Choice of Baseline\\nThe entire experimental section only includes the Zephyr model as a baseline, with no comparisons to other baselines. \\nCurrently, on the alpaca_eval board(https://tatsu-lab.github.io/alpaca_eval/) Zephyr-7B-Beta has a win rate of 13.2%, ranking 131st. When controlling for model size and listing only models with 8B or fewer parameters, there are other fine-tuned models based on comparable foundation models. These include the Gemma series (e.g., Gemma-2-9B-it-SimPO, rank 8; Gemma-2-9B-it-DPO, rank 10), Llama3-based fine-tuned models like Llama-3-Instruct-8B-WPO-HB-v2 (rank 20), and Mistral 7B-based models like Storm-7B (rank 24). 
Without comparisons to any of these other similarly sized fine-tuned models, the paper\\u2019s conclusions are difficult to accept.\\n\\nLongitudinally, the Zephyr model has several later versions, including FsfairX-Zephyr-Chat-v0.1 (rank 50, LC win rate 34.8%), ExPO + Zephyr 7B Beta (rank 128, LC win rate 14.0%), and Zephyr 7B Beta (rank 131, LC win rate 13.2%). The paper only selected Zephyr 7B Beta, which ranks last among these, as its baseline, with a win rate only 40% of the current best Zephyr model. Additionally, instead of using the common win rate metric, the paper employs win score, making it difficult to directly compare the performance of Optune against existing models.\", \"questions\": \"Why didn\\u2019t the authors choose a stronger model from the Zephyr series as a baseline, or conduct comparisons with other models like Llama-3-Instruct-8B-WPO-HB-v2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors propose a more generation-efficient procedure for online RLHF by preferentially sampling responses from prompts that had low rewards and weighting samples by the reward gap in the online DPO loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Show across multiple experiments that the proposed strategy outperforms a random selection strategy.\"], \"weaknesses\": [\"Lack of relevant baselines on sample selection: a pretty common strategy in RLHF is to pick prompts that had the largest \\\"margin\\\" between the winner and the loser for further training (e.g. https://arxiv.org/abs/2404.03715). 
Could you compare your strategy against this technique?\", \"Lack of relevant baselines on policy optimization: a variety of papers have already noted that IPO / DPO ignore the gap in reward between the winning and losing completions. Could you compare against at least one of these (e.g. REBEL: https://arxiv.org/abs/2404.16767).\"], \"questions\": \"1. Would it be possible to provide some sort of conceptual grounding for your proposed prompt selection strategy? I could imagine a connection to the pessimism principle in RL.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes two improvement for online preference learning: one is reward-based prompt selection, and the other one is weighted DPO. For reward-based prompt selection, the paper proposes that between each iteration of online DPO, one should only regenerate certain proportion of the reponses to the prompts that have the lowest rewards. For weighted DPO, the paper proposes to add a weight term in the DPO loss based on the reward difference in each pair of generation. The paper performs experiments to show that, with reward-based prompt selection, for both DPO and wDPO loss, selecting the right portion of regeneration will improve the efficiency of online DPO without sacraficing the performance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The experiment is performed on a 7B model and the result suggests that with a right ratio for regeneration, the proposed method indeed improves training efficiency without decreasing the performance.\\n2. The experiment has reasonable comparison with random subselection and shows that random subselection does not work as well as the proposed method.\", \"weaknesses\": \"1. It is unclear to me that ranking the prompts by absolute reward makes sense, especially if the reward model is trained by BT loss. 
For each fixed prompt, the BT loss only cares about the difference between two responses, so different prompts may induce a different bias in the corresponding completions. Thus, having a low reward does not necessarily mean that the model is currently performing badly on the prompt. Honestly, I might be describing the procedure wrong because I don't see a clear definition of \"ranking prompts by reward\" unless I am missing something.\\n\\n2. Also, it is unclear to me why the wDPO loss makes sense. If the reward gap between two generations is large, it is likely that the pair is already easy for the model, and the term might not even contribute to the training - why not use the inverse weight?\\n\\n3. There is no information on how Table 1 is generated, and it seems like it is the major motivation for the proposed method. More details should be provided, especially to show that all procedures are fully optimized - for example, for generation it seems that using vLLM to speed up the inference is the common approach.\\n\\n4. Frankly, the paper lacks basic rigor. \\n- In section 3.1, the important concept of \"reward gain\" is not defined, so the motivation part is very confusing.\\n- In line 7 of alg 1, the prompt $x^i$ is already popped, then from line 12 should we never see the recently added pairs in $\\\\mathcal{R}_t$?\\n- In line 21 of alg 1, how is the ranking computed?\\n- nits: a) in eq (2), the two terms inside KL are not distributions. b) eq (2) uses $\\\\alpha$ and the following uses $\\\\beta$. c) in line 167 $\\\\mathcal{P}$ is not defined.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0Th6bCZwKt
Gaussian Mixture Models Based Augmentation Enhances GNN Generalization
[ "Yassine ABBAHADDOU", "Fragkiskos D. Malliaros", "Johannes F. Lutzeyer", "Amine M. Aboussalah", "Michalis Vazirgiannis" ]
Graph Neural Networks (GNNs) have shown great promise in many learning tasks, notably including node and graph classification, but they face difficulties when tested on new or unseen data. These challenges are exacerbated when training data is limited in size or diversity. To address this issue, we introduce a theoretical framework using Rademacher complexity to compute a regret bound on the generalization error and then characterize the effect of data augmentation. This framework informs the design of GMM-GDA, a new, efficient graph data augmentation (GDA) algorithm leveraging the capability of Gaussian Mixture Models (GMMs) to approximate any distribution. Our approach not only outperforms existing augmentation techniques but also offers improved time complexity, making it highly suitable for real-world applications.
[ "Graph Neural Networks", "Data Augmentation" ]
Reject
https://openreview.net/pdf?id=0Th6bCZwKt
https://openreview.net/forum?id=0Th6bCZwKt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ysWqHTgzfA", "v3O0BgpqBH", "msvA8Mhmqp", "l7ovPaOTaq", "iuIKO7tP7M", "fqP6hZvO4k", "fahkSCRS5W", "Y49RZV1E4b", "TxUA7TTm6t", "TITnDHDdLl", "RmbysKdHJJ", "RZZZ3IebFW", "Mc9ra1peV9", "FRNAEfpL1m", "8G52JFP6bJ", "51d6fcNsVd", "4DnydbZafD", "1f58e0xVDF", "07R5oXjWIC" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "decision", "official_comment", "official_comment" ], "note_created": [ 1732359015080, 1732689200564, 1730812203040, 1732797284922, 1732358947579, 1732359888189, 1732311033506, 1730687259127, 1732797248150, 1732359855961, 1733087902354, 1732797106518, 1730018607026, 1732651162760, 1735036983403, 1730835530728, 1737524165488, 1732307284572, 1733088207212 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_qdD6" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_25Li" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_YuEk" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_qdD6" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_25Li" ], [ "ICLR.cc/2025/Conference/Submission12090/Area_Chair_18dn" ], [ "ICLR.cc/2025/Conference/Submission12090/Reviewer_7B8K" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission12090/Authors" ], [ "ICLR.cc/2025/Conference/Submission12090/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 25Li (2/2)\", \"comment\": \"***[Question 2] Distribution invariance in GMM-GDA***\\n\\nThe GMM-based sample generation method ensures that the augmented samples remain within the distribution of the original data. This is because Gaussian Mixture Models (GMMs) are universal approximators, meaning they can approximate any probability distribution given a sufficient number of components. By learning the original data distribution with a GMM, the augmented samples generated from this model are drawn from a distribution that is very close to the original data distribution, preserving its key characteristics.\\n \\nFurthermore, the exponential decay in the Gaussian components of the GMM reduces the likelihood of sampling graph representations far from the learned distribution. This ensures that augmented samples stay consistent with the original data distribution.\\n\\n\\n***[Question 3] Extension of the approach to other graph learning tasks***\\n\\nUnlike other baseline approaches, GMM-GDA is adaptable to data augmentation for tasks like node classification. The extension follows a similar framework, but here the focus shifts to learning the distribution at the node level rather than the graph level: 1/ We train the GNN on the node classification task. 2/ We use a Gaussian Mixture Model (GMM) to learn the distribution of node representations within the same class. 3/ We sample new node representations from the learned GMMs to augment the data.\\n\\nHowever, the method cannot be directly extended to the link prediction task. 
GMM-GDA is designed to learn and sample representations based on a specific feature space, either at the node or graph level, while for link prediction, the task depends on the pairwise relationships between node representations.\n\n***[Comment] The Upper Bound in Theorem 3.1***\n\nIt is true that if the maximum of the expectations $ \\max_{n \\in \\{1, \\ldots,N\\}} \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$ is non-zero, the Rademacher complexity of $\\ell_{aug}$ may not necessarily be smaller than the Rademacher complexity of $\\ell$. However, by minimizing the term $ \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$, we are more likely to reduce the additional Rademacher complexity, thus increasing the chances of achieving a lower overall Rademacher complexity. Since we use Gaussian Mixture Models (GMMs) for augmenting the graph data, which are universal approximators, we ensure that the expectations $ \\max_{n \\in \\{1, \\ldots,N\\}} \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$ approach zero.\"}", "{\"title\": \"Reply to the Authors\", \"comment\": \"Thanks to the authors for their detailed responses, which have addressed most of my concerns about the correctness of the theoretical results in this work. I have increased my score to 6.\"}", "{\"summary\": \"In this paper the authors introduce a data augmentation method for graph datasets.\nThe algorithm leverages Gaussian Mixture Models (GMM) to find the maximum likelihood estimates for each cluster given by the embeddings of the different classes. Finally, they use the GMM to generate augmented data. 
Notice that augmented samples are generated directly in the embedding space. The authors provide a bound on the Rademacher complexity for the loss function modified to account for augmented data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed approach introduces a novel method for graph data augmentation.\", \"The problem studied is relevant and interesting.\", \"The algorithm's time complexity is analyzed.\"], \"weaknesses\": [\"The clarity of the paper, particularly in Section 3, could be enhanced to guide the reader more effectively through the discussion.\", \"The method requires pre-training of the model and moreover is dependent on the specific model architecture, meaning that the augmented dataset cannot be used by other GNN models.\", \"Baselines in Table 1 could be expanded to include additional augmentation techniques, such as edge insertion and feature drop.\", \"The metric used to measure the distance between $\\\\mathcal{G}^{\\\\lambda}$ and $\\\\mathcal{G}$ in Theorem 3.1 and subsequent sections is not clearly defined.\"], \"questions\": [\"In the case of GMM, what does the parameter $\\\\lambda$ represent?\", \"Does the GMM-based sample generation method ensure that the augmented samples remain within the distribution of the original data?\", \"Can the method be extended to different tasks on graphs, 
such as node classification and link prediction?\"], \"comment\": \"If the maximum of the expectations $\\\\mathbb{E}_{\\\\lambda}[\\\\mathcal{G}_n^{\\\\lambda} - \\\\mathcal{G}_n]$ is non-zero, the Rademacher complexity of $\\\\ell_{aug}$ may not necessarily be smaller than the Rademacher complexity of $\\\\ell$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 25Li (3/3)\", \"comment\": \"***Table 1: Ablation Study GIN***\\n\\n\\n| Model | IMDB-BINARY | IMDB-MULTI | MUTAG | PROTEINS | DD |\\n|------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|\\n| GMM w/ EM | ***71.70 (4.24)*** | ***49.20 (2.06)*** | ***88.83 (5.02)*** | ***71.33 (5.04)*** | ***68.61 (4.62)*** |\\n| GMM w/ VBI | 71.40 (2.65) | 47.80 (2.22) | 88.30 (5.19) | 70.25 (4.65) | 67.82 (4.96) |\\n| KDE | 69.10 (3.93) | 41.46 (3.02) | 77.60 (6.83) | 60.37 (3.04) | 67.48 (6.18) |\\n| Copula | 70.60 (2.61) | 47.60 (2.29) | 88.30 (5.19) | 70.16 (4.55) | 67.91 (4.90) |\\n| GAN | 70.50 (3.80) | 48.40 (1.71) | ***88.83 (5.02)*** | ***71.33 (5.55)*** | 67.74 (4.82) |\\n\\n\\n\\n***Table 2: Ablation Study GCN***\\n\\n| Model | IMDB-BINARY | IMDB-MULTI | MUTAG | PROTEINS | DD |\\n|------------|------------------------|-----------------------|-----------------------|-----------------------|-----------------------|\\n| GMM w/ EM | ***71.00 (4.40)*** | ***49.82 (4.26)*** | ***76.05 (6.47)*** | ***70.97 (5.07)*** | ***71.90 (2.81)*** |\\n| GMM w/ VBI | ***71.00 (4.21)*** | 49.53 (4.26) | ***76.05 (6.47)*** | ***70.97 (4.52)*** | 71.64 (2.90) |\\n| KDE | 55.90 (10.29) | 39.53 (2.87) | 66.64 (6.79) | 59.56 (2.62) | 58.66 (3.97) |\\n| Copula | 69.80 (4.04) | 47.13 (3.45) | 74.44 (6.26) | 65.04 (3.37) | 65.70 (3.04) |\\n| GAN | 70.60 (3.41) | 48.80 (5.51) | 75.52 (4.96) | 69.98 (5.46) | 66.26 (3.72) |\\n\\n\\nWe compare 
these approaches for both GCN and GIN in the table above. As shown, GMM with EM consistently outperforms the alternative methods across most datasets in terms of accuracy. The VBI method, an alternative approach for estimating GMM parameters, yields comparable \nperformance to the EM algorithm. This consistency across datasets highlights the effectiveness and robustness of GMMs in capturing the underlying data distribution.\n\nIn certain cases, particularly with the GIN model, we observed competitive performance from the GAN approach, which, unlike GMM, requires additional training. Hence, GMMs provide a more straightforward and efficient solution.\n\nWe have included this detailed ablation study in the new version of the manuscript, cf. Appendix D. \n\n[2] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.\n\n[3] Guo, H., Lu, C., Bao, F., Pang, T., Yan, S., Du, C., \\\\& Li, C. (2024). Gaussian mixture solvers for diffusion models. Advances in Neural Information Processing Systems, 36.\n\n[4] H\\u00e4rdle, W., Werwatz, A., M\\u00fcller, M., \\\\& Sperlich, S. (2004). Nonparametric density estimation. Nonparametric and semiparametric models, 39-83.\n\n[5] Roger B. Nelsen (1999), \\\"An Introduction to Copulas\\\", Springer. ISBN 978-0-387-98623-4\n\n\n[6] Dimitris G Tzikas, Aristidis C Likas, and Nikolaos P Galatsanos. (2008). The variational approximation for Bayesian inference. IEEE Signal Processing Magazine, 25(6):131\\u2013146.\n\n[7] Miin-Shen Yang, Chien-Yo Lai, and Chih-Ying Lin. A robust EM clustering algorithm for Gaussian\nmixture models. Pattern Recognition, 45(11):3950\\u20133961, 2012.\n\n[8] Xu, L., \\\\& Veeramachaneni, K. (2018). Synthesizing tabular data using generative adversarial networks. 
arXiv preprint arXiv:1811.11264.\\n\\n***[Question 2] Consistency with the Number of Gaussian Components***\\n\\nWe experimented with values of \\\\(K\\\\) in the range from 2 to 50, finding that \\\\(K=50\\\\) was sufficient to balance model complexity and computational feasibility. In many cases, even a small number of components (e.g., \\\\(K=2\\\\) or \\\\(K=5\\\\)) yielded competitive performance (see Table 9 in the submitted manuscript). Below, we include a hyperparameter sensitivity analysis conducted on the GIN backbone using the IMDB-BINARY dataset, where we observed consistent results.\\n\\n\\n\\n| | K=10 | K=20 | K=30 | K=40 | K=50 |\\n|-----|-------------|-------------|-------------|-------------|-------------|\\n| GIN | 71.1 (2.70) | 71.3 (2.28) | 71.3 (2.57) | 71.6 (2.72) | 71.7 (4.24) |\"}", "{\"title\": \"Response to Reviewer 25Li (1/2)\", \"comment\": \"We thank Reviewer 25Li for the feedback. In what follows, we address the raised questions and weaknesses\\npoint-by-point.\\n\\n***[Weaknesses 1] On the clarity of the paper***\\n\\nThank you for your feedback. We acknowledge that the clarity of Section 3 could be improved to help guide the reader more effectively through the discussion. We have added more details in the revised manuscript. We will further enhance the clarity and organization of this section in the camera-ready version of the manuscript. \\n\\n ***[Weaknesses 2] Pre-training of the model***\\n\\nTo clarify, our approach does not require pre-training of the model in the traditional sense. Instead, we split the training process of the GNN into two distinct parts. The GNN consists of a sequence of message-passing layers followed by a shallow neural network, referred to as the post-readout function. We have provided a detailed summary of our approach in Algorithm 1.\\n\\nIn our method, we first train the message-passing layers on the graph classification task. 
Then, we train the shallow post-readout network using the representations from both the original and augmented graphs. The second training phase, which involves the post-readout function, is very quick as it consists of a simple shallow MLP that can be trained in just a few seconds.\n \nAs a result, while the total training time is increased due to this two-step process, the additional time required for training the second part is minimal. Therefore, the overall training time is not significantly impacted by our approach. \n\n***[Weaknesses 2] Model architecture specificity*** \n\nWe agree that the graph data augmentation method is dependent on the specific model architecture. The decision to perform augmentation at the level of graph representations, rather than directly on the graph inputs, is driven by Theorem 3.3. This theorem demonstrates that an effective data augmentation strategy should take into account the particular model architecture and its learned weights. Additionally, to evaluate a data augmentation strategy, it is necessary to train different GNN backbones separately on the augmented train set, e.g. evaluating a data augmentation approach on GCN and GIN requires training both GCN and GIN on the training set independently.\n\n***[Weaknesses 3] Edge insertion and Feature drop Baselines***\n\nThank you for suggesting additional baselines. We have already included baselines [1], which outperform these methods. Nevertheless, we will add the suggested baselines in the camera-ready version of the manuscript. While edge insertion is similar to the already-used DropEdge baseline, we will add it for a more complete comparison. However, feature drop cannot be applied across all datasets, as some lack node features, making this augmentation impractical in those cases.\n\n[1] Yoo, J., Shim, S., & Kang, U. (2022, April). Model-agnostic augmentation for accurate graph classification. In Proceedings of the ACM Web Conference 2022 (pp. 
1281-1291).\n\n***[Weaknesses 4] Metric in Theorem 3.1***\n\nThe norm used to measure the distance between $\\mathcal{G}^\\lambda$ (the augmented graph) and $\\mathcal{G}$ (the original graph) can correspond to any norm defined in the space of graphs, e.g. the spectral and Frobenius norms. The specific choice of the norm should align with the Lipschitz constant $L_{\\text{Lip}}$. Importantly, the result of Theorem 3.1 remains unchanged regardless of the chosen norm because all norms are equivalent in the space of graphs. If $L_{\\text{Lip}}$ is taken to be the Lipschitz constant of the post-readout function only, then the norm corresponds to the difference between the graph-level embeddings produced by the readout function, i.e., $\\|h_{\\mathcal{G}^\\lambda} - h_{\\mathcal{G}}\\|$. \n\nTo address this, we explicitly clarify in the revised manuscript that the norm can be any norm that matches the Lipschitz constant $L_{\\text{Lip}}$. \n\n\n***[Question 1] Parameter $\\lambda$ in the case of GMM***\n\nIn our approach, the augmented hidden representation $h_{\\mathcal{G}_{n}^{\\lambda}}$ corresponds to a sampled vector from the GMM distribution $\\mathcal{P}_c$ that was previously fit on the hidden representations $\\mathcal{H}_c$ of the graphs in the training set with the same class $c$.\n\nFormally, $h_{\\mathcal{G}_{n}^{\\lambda}}= A( \\{\\mathcal{H}_c\\}_n, \\lambda_c) = \\lambda_c$, where $\\lambda_c$ is sampled from the GMM distribution $\\mathcal{P}_c$.\"}", "{\"title\": \"Response to Reviewer YuEk (2/2)\", \"comment\": \"***[Question 5] Insights into GMM-GDA's Results with GIN and GCN***\\n\\nAs mentioned in the paper, the comparison between different data augmentation approaches varies when using GCN and GIN. This is further explained by Theorem 3.3, which leverages influence functions to highlight how augmentation strategies may behave differently with model architectures. 
\n\nTherefore, the observation that GMM-GDA performs better with GIN than GCN in a few cases can be explained by the architectural differences between these models. GIN, with its stronger expressive power, closely approximates the Weisfeiler-Lehman graph isomorphism test, enabling it to better leverage the diversity of augmented graph representations generated by GMM-GDA. Despite these variations, our approach consistently outperforms the baselines in most cases.\n\nAdditionally, our method is not only more effective in improving performance but also demonstrates superior time efficiency. Unlike many baselines, GMM-GDA generates augmented data with minimal computational cost.\n \n\n***[Question 6] Configuration models based data augmentation***\n\nTo address this, we have added the detailed steps of the configuration models-based data augmentation in Appendix C. This should provide a clearer understanding of the methodology of this experiment.\"}", "{\"title\": \"Response to Reviewer qdD6\", \"comment\": \"*We thank Reviewer qdD6 for the feedback. In what follows, we address the raised questions and weaknesses point-by-point.*\\n\\n\\n\\n***[Weaknesses] Motivation behind GMM-GDA***\\n\\nIndeed, other generative techniques, such as generative models, could also be used to fit the embeddings of the training data. However, we chose to use GMMs for several important reasons, particularly the efficiency and effectiveness of GMMs in this context. \\n \\nGMMs are universal approximators, meaning they can effectively approximate any distribution, including the distribution of the graph embeddings. This property ensures that the augmented data is drawn from a distribution that closely aligns with the original data distribution, as shown in Theorem 3.1. While other generative methods could be used to fit the embeddings, GMMs have the advantage of being relatively simple and efficient in terms of computation (we can fit a GMM and generate new samples in very few seconds). 
Unlike more complex methods such as generative models, GMMs can achieve high-quality approximations with minimal computational overhead.\n \n\n***[Question 1] The number of Gaussian distributions***\n\nTheoretically, increasing $K$ allows the GMM to better approximate the true distribution of graph embeddings, as a higher number of components provides more flexibility in capturing complex distributions. We experimented with $K$ values in the range of 2 to 50, with the maximum $K=50$ being sufficient to balance model complexity and computational feasibility. In some cases, even a small number of components (e.g., $K=2$ or $K=5$) was sufficient to achieve competitive performance. Below, we include a hyperparameter sensitivity analysis on the GIN backbone and the IMDB-BINARY dataset, where we observed consistent results.\n\n| | K=10 | K=20 | K=30 | K=40 | K=50 |\n|-----|-------------|-------------|-------------|-------------|-------------|\n| GIN | 71.1 (2.70) | 71.3 (2.28) | 71.3 (2.57) | 71.6 (2.72) | 71.7 (2.64) |\n\n\n\n***[Question 2] Typo***\n\nWe are grateful to the reviewer for spotting the typo. We made the necessary adaptations in the submitted paper.\n\n***[Question 3] Proof of Theorem 3.1***\n\nWe thank the reviewer for pointing out the error in the proof. 
We updated the proof in the paper with the necessary changes.\n\n***[Question 4] Clarification on the definition of $\\hat{\\theta}_{aug}$***\n\nAs defined in line 754 (in the Appendix) and in the mathematical formalism of graph data augmentation (in Section 3.1), the weights $\\hat{\\theta}_{aug}$ correspond to \n\n$ \\text{argmin}_{\\theta} \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}_{\\lambda \\sim \\mathcal{P} }\\left [ \\ell(\\mathcal{G}_n^\\lambda,\\theta) \\right ],$\n\nand it is empirically approximated by $\\text{argmin}_{\\theta} \\frac{1}{N\\times M} \\sum_{n=1}^N \\sum_{m=1}^M \\ell(\\mathcal{G}_n^{\\lambda_{n,m}}, \\theta).$ \n\nThe proof for this formulation still holds. We have provided a clearer definition of $\\hat{\\theta}_{aug}$ in Section 3 of the manuscript.\n\n***[Question 5] The Upper Bound in Theorem 3.1***\n\nIt is true that if the maximum of the expectations $ \\max_{n \\in \\{1, \\ldots,N\\}} \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$ is non-zero, the Rademacher complexity of $\\ell_{aug}$ may not necessarily be smaller than the Rademacher complexity of $\\ell$. However, by minimizing the term $ \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$, we are more likely to reduce the additional Rademacher complexity, thus increasing the chances of achieving a lower overall Rademacher complexity. 
Since we use Gaussian Mixture Models (GMMs) for augmenting the graph data, which are universal approximators, we ensure that the expectations $ \\max_{n \\in \\{1, \\ldots,N\\}} \\mathbb{E}_{\\lambda \\sim \\mathcal{P}} \\left [ \\left \\| \\mathcal{G}_n^\\lambda -\\mathcal{G}_n \\right \\| \\right ]$ approach zero.\"}", "{\"summary\": \"This paper proposed GMM-GDA, a graph data augmentation algorithm with better generalization abilities and faster training speed. GMM-GDA is presented based on a theoretical analysis relying on Rademacher complexity, which bounds the generalization error by the difference between the augmented data and original data. Furthermore, this paper verified the effectiveness of GMM-GDA from the perspective of influence functions, and detailed experiments show the superiority of the proposed algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper analyzes the problem of GNN generalization capability, and it is well-written and clearly presented. The whole paper is easy to understand.\\n\\n2.\\tThis paper provides theoretical insights before presenting the algorithm. \\n\\n3.\\tThe experiments of this paper are closely related to the research goal proposed.\", \"weaknesses\": \"1.\\tThis paper did not explain the necessity of performing data augmentation by GMM. The authors should strengthen the explanation of the relationship between theory and the algorithm.\\n\\n2.\\tSome tables and figures do not seem to support the conclusions in the paper, which needs more explanation.\", \"questions\": \"1.\\tBased on theorem 3.1, the motivation of the proposed method is to guarantee the alignment of the augmented data and original data, so the authors apply GMM to fit the embeddings of the training data. 
But an explanation of why GMM is applied is lacking; it seems that a simple DNN (or other more complex generative data augmentation techniques) can also fit the embeddings. Please explain the advantages of GMM.\n\n2.\tIt seems that only the post-readout function is trained on the combination of augmented data and original data (line 264). Why not take several more iterations and update the parameters of the message passing layers? Please explain why the embeddings of the training data are generated only once and not re-generated after updating the network. \n\n3.\tFigure 2 shows the influence scores of the augmented embeddings on different datasets. But the authors did not analyze why their algorithm performs worse on the dataset DD. This is an interesting phenomenon and worth a deeper analysis. \n\n4.\tThe authors claim that GMM-GDA is efficient in the augmentation steps and training steps and provide results in table 6 (line 315). But table 6 did not show the efficiency of GMM-GDA since it still costs much augmentation time or training time. Please explain why such a conclusion can be drawn from table 6. \n\n5.\tIn the results of tables 1&2, it seems that GMM-GDA has a better performance in the setting of GIN compared with GCN. This phenomenon is worth a deeper analysis. \n\n6.\tThe authors claim that the configuration models (line 470) are part of an ablation study, but it is hard to understand. Please explain the conclusion of this experiment more clearly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 25Li (2/3)\", \"comment\": \"***[Question 2] GMM: Approximation and Convergence Challenges***\\n\\nIt is true that we could have used other methods to model the distribution of the training graph data, such as generative models or alternative techniques to fit the GMM. 
However, the reasons for choosing GMM with the EM algorithm are three-fold:\n\n 1.***Superior Performance:*** Despite its simplicity, GMM with the EM algorithm consistently outperforms baseline methods across most cases in our experiments. \n \n 2.***Approximation Capability:*** By increasing $K$, a GMM can approximate any smooth target density. This property is a well-established result in statistical learning theory, as discussed in works like Deep Learning by [2]. In our approach, we ensure that a sufficiently large number of Gaussian components $K$ is used to accurately model the distribution of graph embeddings (by grid searching over the number of components from the values $\\{2, 5, 10, 20, 30, 40, 50\\}$), allowing us to approximate the underlying data distribution effectively.\n \n 3.***Computational Efficiency:*** The EM algorithm is computationally efficient and allows us to generate augmented data at a relatively small computational cost, as highlighted in Section 3.3 of the paper. The complexity of fitting a GMM over $T$ iterations is $\\mathcal{O}(N \\cdot K\\cdot T\\cdot d^2)$ ($N$ is the number of points, $K$ is the number of Gaussians, and $d$ is the dimension of the features) [7]. Moreover, GMM is relatively easy to sample from, as highlighted in [3].\n\nTo further empirically validate the choice of using GMMs fitted with the EM algorithm, we explored an alternative approach to fitting the GMM using Variational Bayesian Inference (VBI) when using GCNs, presented as an ablation study in Appendix D of the originally submitted manuscript. The results demonstrate comparable performance to the EM algorithm, further validating the robustness of our approach. \n\nIn reaction to your follow-up question, we have expanded our evaluation to include additional methods for modeling the distribution of the graph representations to provide additional comparison and motivate the use of GMMs with the EM algorithm. 
Specifically, our comparison now includes:\n\n -***GMM w/ Variational Bayesian Inference (VBI):*** We specifically compared the Expectation-Maximization (EM) algorithm, discussed in the main paper, with the Variational Bayesian (VB) estimation technique [6] for parameter estimation of each Gaussian Mixture Model (GMM). The objective of including this baseline is to explore alternative approaches for fitting GMMs to the graph representations. In addition to the experimental results of VBI originally presented in Appendix D for the GCN backbone, we have extended the comparison between VBI and the EM algorithm to the GIN architecture as well.\n\n -***Kernel Density Estimation (KDE) [4]:*** KDE is a neighbor-based method and a non-parametric approach to estimating the probability density. KDE estimates the probability density function by placing a kernel function (e.g., Gaussian) at each data point. The sum of these kernels approximates the underlying distribution. Sampling can be done using techniques like Metropolis-Hastings. The purpose of using KDE as a baseline is to evaluate an alternative density estimator different from the Gaussian Mixture Model (GMM).\n\n -***Copula-Based Methods [5]:*** We model the dependence structure between variables using copulas, while marginal distributions are modeled separately. We sample from the marginal distributions and then transform the samples using the copula.\n\n -***Generative Adversarial Network (GAN):*** GANs are powerful generative models that learn to approximate the data distribution through an adversarial process between two neural networks. To evaluate the performance of deep learning-based generative approaches for modeling graph representations, we included tGAN, a GAN architecture specifically designed for tabular data [8]. 
We particularly train tGAN on the graph representations and then sample new graph representations from the generator.\"}", "{\"title\": \"Response to Reviewer YuEk (1/2)\", \"comment\": \"*We thank Reviewer YuEk for the feedback. In what follows, we address the raised questions and weaknesses point-by-point.*\\n\\n***[Question 1 + Weaknesses 1] The Use of GMM Over Alternative Methods***\\n\\nIndeed, other generative techniques, such as generative models, could also be used to fit the embeddings of the training data. However, we chose to use GMMs for several important reasons, particularly the efficiency and effectiveness of GMMs in this context. \\n \\nGMMs are universal approximators, meaning they can effectively approximate any distribution, including the distribution of the graph embeddings. This property ensures that the augmented data is drawn from a distribution that closely aligns with the original data distribution, as shown in Theorem 3.1. While other generative methods could be used to fit the embeddings, GMMs have the advantage of being relatively simple and efficient in terms of computation (we can fit a GMM and generate new samples in very few seconds). Unlike more complex methods such as generative models, GMMs can achieve high-quality approximations with minimal computational overhead.\\n \\n***[Question 2] Training Only the Post-Readout Function***\", \"the_decision_to_limit_retraining_to_the_post_readout_function_after_augmenting_the_dataset_is_motivated_by_several_considerations\": \"1/ Before the readout function, the GNN generates embeddings at the node level. Directly learning a distribution at the node level is more complex as the number of nodes varies across graphs. In contrast, the graph-level representations are fixed-dimensional embeddings, which makes it easy to learn their distribution with a GMM and sample new graph representations from it. 
2/ Augmenting the data increases the size of the training set by adding new graph representations, i.e. augmented graphs. However, the post-readout function is a relatively small component of the GNN, typically consisting of a linear layer followed by a softmax. As a result, training this component on the augmented dataset is computationally efficient. In contrast, retraining the message-passing layers, which involve multiple neighborhood aggregation steps and operate at the node level, would significantly increase computational training time due to the larger dataset size.\n\n***[Question 3] Analyzing the influence scores on the DD Dataset***\n\nOur approach, GMM-GDA, generally performs better than the baselines for the DD dataset across both GIN and GCN backbones, as demonstrated in Tables 1 and 2. Figure 2 highlights that GMM-GDA exhibits greater effectiveness, as reflected by positive influence scores, on GIN compared to GCN. This behavior is consistent not only with our method but also with the baselines, as most graph data augmentation strategies tend to enhance test accuracy more significantly for GIN than for GCN when applied on DD.\n\n***[Weaknesses 2 + Question 4] The augmentation time and training time***\n\nIn addition to outperforming the baselines on most datasets, our approach offers an advantage in terms of time complexity (cf. Table 6 in Appendix E). The training time of baseline models varies depending on the augmentation strategy used, specifically, whether it involves pairs of graphs or individual graphs. Even in cases where a graph augmentation has a low computational cost for some baselines, training can still be time-consuming as multiple augmented graphs are required to achieve satisfactory test accuracy. For instance, methods like DropEdge, DropNode, and SubMix, while computationally simple, require generating multiple augmented samples at each epoch, thereby increasing the overall training time. 
In contrast, GMM-GDA introduces a more efficient approach by generating only one augmented graph per training instance, which is reused across all epochs. This design ensures a balance between computational efficiency and augmentation effectiveness, reducing the overall training burden while maintaining strong performance. \\n\\nThe only baseline that is more time-efficient than our approach is GeoMix; however, our method consistently outperforms GeoMix across all settings, as shown in Tables 1 and 2.\"}", "{\"comment\": \"Thank you for acknowledging our clarification and for raising your score. We greatly appreciate your constructive feedback.\"}", "{\"title\": \"Response to Reviewer 25Li (1/3)\", \"comment\": \"We thank Reviewer 25Li very much for their follow-up questions. In what follows, we try to further clarify the remaining questions.\\n\\n***[Weakness 4] Definition of Graph Distance in Non-Euclidean Spaces***\\n\\nYes, the distance between two graphs can be based on the norm of the difference of their adjacency matrices, and the norm could be for example the Frobenius or spectral norm. As mentioned in our response, the inequality holds for both norms since all norms are equivalent in finite-dimensional spaces. Specifically, let us consider the graph space $(\\\\mathbb{G}, \\\\lVert \\\\cdot \\\\rVert_{\\\\mathbb{G}})$ and the feature space $(\\\\mathbb{X}, \\\\lVert \\\\cdot \\\\rVert_{\\\\mathbb{X}})$, where $\\\\lVert \\\\cdot \\\\rVert_{\\\\mathbb{G}}$ and $\\\\lVert \\\\cdot \\\\rVert_{\\\\mathbb{X}}$ denote the norms applied to the graph structure and features, respectively. Assuming a maximum number of nodes per graph, which is a realistic assumption for real-world data, the product space $\\\\mathbb{G} \\\\times \\\\mathbb{X}$ is a finite-dimensional real vector space, and all the norms are equivalent. 
Thus, the choice of norm does not affect the theorem, as long as the Lipschitz constant is adjusted accordingly.\n \n \n When considering only structural changes, with fixed node features, the distance between two graphs $\\mathcal{G}^\\lambda,\\mathcal{G}$ is defined as:\n $$\\lVert\\mathcal{G}^\\lambda - \\mathcal{G} \\rVert= \\lVert A - A^\\lambda \\rVert_{\\mathbb{G}}, \\qquad \\qquad (1)$$ where $A,A^\\lambda$ are respectively the adjacency matrices of $\\mathcal{G},\\mathcal{G}^\\lambda,$ and the norm $\\lVert\\cdot \\rVert_{\\mathbb{G}}$ can be the Frobenius or spectral norm. If both structural and feature changes are considered, the distance extends to:\n $$\\lVert\\mathcal{G}^\\lambda - \\mathcal{G} \\rVert= \\alpha \\lVert A - A^\\lambda \\rVert_{\\mathbb{G}} +\\beta \\lVert X - X^\\lambda \\rVert_{\\mathbb{X}}, \\qquad \\qquad (2) $$ where $X^\\lambda, X$ are the node feature matrices of $\\mathcal{G}^\\lambda,\\mathcal{G}$ respectively, and $\\alpha, \\beta$ are hyperparameters controlling the contribution of structural and feature differences, respectively. \n\nIn most baseline graph augmentation techniques, such as $\\mathcal{G}$-Mixup, SubMix, and DropNode, the alignment between nodes in the original graph $\\mathcal{G}$ and the augmented graph $\\mathcal{G}^\\lambda$ is known. However, in cases where the node alignment is unknown, we must take into account the node permutations. The distance between the two graphs is then defined as:\n\\begin{equation*} \n \\lVert\\mathcal{G}^\\lambda - \\mathcal{G} \\rVert= \\min_{P \\in \\Pi} \\left( \\alpha \\lVert A - P A^\\lambda P^T \\rVert_{\\mathbb{G}} + \\beta \\lVert X - P X^\\lambda \\rVert_{\\mathbb{X}} \\right), \\qquad \\qquad (3)\n\\end{equation*}\nwhere $\\Pi$ is the set of permutation matrices. 
The matrix $P$ corresponds to a permutation matrix used to order nodes from different graphs. By using Optimal Transport, we find the minimum distance over the set of permutation matrices, which corresponds to the optimal matching between nodes in the two graphs. This formulation represents the general case of graph distance, which has been used in the literature [1].\\n\\nImportantly, Theorem 3.1 applies to both Frobenius and spectral norms in the three scenarios (1), (2) and (3) that we lay out in our answer. Hence, we tried to state the theorem in full generality in our manuscript. But we appreciate that the current statement may be perceived to lack specificity. We have provided more details and insights on the norms to which Theorem 3.1 applies in the revised version of the manuscript (in Section 3.1 Lines 229-35 and in Appendix G). \\n\\n[1] Abbahaddou, Y., Ennadir, S., Lutzeyer, J. F., Vazirgiannis, M., \\\\& Bostr\\u00f6m, H. (2024). \\\"Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks.\\\" In The Twelfth International Conference on Learning Representations.\"}", "{\"summary\": \"The authors propose a novel graph data augmentation based approach to tackle the graph OOD problem. To be specific, they first train a GNN model using the training data. Then, graphs within each class in the training data are fed to the GNN model and the outputs of the readout layer are treated as the embeddings of this class. After that, the authors propose to fit a Gaussian Mixture Model (GMM) on the embeddings for each class using the classical EM algorithm. Finally, the augmented embeddings are generated by sampling from the GMMs for each class, which are combined with the embeddings of training data and used for fine-tuning the post-Readout function. The proposed framework enjoys high computational efficiency since the post-Readout function contains only a linear layer and the (time) complexity of fitting GMMs is linear. 
The authors also provide some theoretical analysis. First, they analyze the excess risk of the graph augmentation approach, and the result shows that minimizing the expected distance between original graphs and augmented ones could reduce the excess risk. Second, they use influence functions to quantify the effect of augmented data on the model's performance on testing data. Experimental results show that their proposed method has competitive performance against baselines and has significant advantages in robustness against structure corruption and in time complexity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed technique is reasonable and efficient. The experimental results provided in the paper are sufficient to support the effectiveness of this technique. Considering the balance between effectiveness and computational efficiency of this technique, it has potential applications in real-world scenarios. Besides, the paper is well-written and easy to follow.\", \"weaknesses\": \"Although using a generative model to learn the graph representation distribution is reasonable, the motivation for adopting GMMs is still unclear. That GMMs are universal approximators of densities could be one of the reasons, yet how many components are needed to achieve a small approximation error is unknown. Besides, the theoretical results provided in this paper do not seem to explain why this method that generates augmentations in representation space is superior or comparable to previous methods that generate augmentations in data space, which could be a promising direction to be explored.\", \"questions\": \"Q1: The number of Gaussian distributions $K$ in GMMs is an important hyperparameter and has an impact on the performance of the GMM. How do you properly choose this hyperparameter in practice? 
Please describe the tuning process of $K$ or provide hyperparameter sensitivity analysis to show how performance varies with different values of $K$.\", \"q2\": \"In line 349, the subscripts $i,j$ of the notation $\\\\\\\\mathcal{L}^{aug}_{i,j}$ are ambiguous. I think the correct one should be $\\\\\\\\mathcal{L}^{aug} _ {n,m}$.\", \"q3\": \"The proof between line 703 to line 715 seems confusing. The first equality only holds when the right hand side also includes a expectation w.r.t. $\\\\lambda_{n,m}$. Indeed, I think the proof should be proceeded as\\n\\\\begin{equation}\\n\\\\left\\\\Vert \\\\\\\\frac{1}{N} \\\\\\\\sum_{n=1}^N \\\\\\\\mathbb{E}_{\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [\\\\ell(\\\\\\\\mathcal{G}_n,\\\\theta) - \\\\ell(\\\\\\\\mathcal{G}'_n,\\\\theta) ] \\\\right\\\\Vert = \\\\\\\\left\\\\\\\\Vert \\\\\\\\frac{1}{N} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\\\\\sim \\\\\\\\mathcal{P}} [\\\\ell(\\\\\\\\mathcal{G} _ k,\\\\theta) - \\\\ell(\\\\\\\\mathcal{G}' _ k,\\\\theta)] \\\\\\\\right\\\\\\\\Vert \\\\leq \\\\\\\\frac{1}{N} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\\\\\sim \\\\\\\\mathcal{P}} [\\\\Vert \\\\ell(\\\\\\\\mathcal{G}_k,\\\\theta) - \\\\ell(\\\\\\\\mathcal{G}'_k,\\\\theta) \\\\Vert ] \\\\leq \\\\frac{1}{N}.\\n\\\\end{equation}\\nThe first equality is obtained by your claim that $\\\\\\\\mathcal{G} _ k = \\\\\\\\mathcal{G}' _ k$ for $k = 1,\\\\ldots, N$ and $k \\\\neq n$. 
The last inequality is obtained by $\\\\ell(\\\\cdot) \\\\in [0,1]$.\\nPlease clarify this issue or correct this part of proof following the above steps.\", \"q4\": \"In line 754, you claim that $\\\\\\\\hat{\\\\theta} _ {aug}$ is the optimal parameter of the loss $\\\\frac{1}{N} \\\\\\\\sum_{n=1}^N \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [\\\\ell(\\\\\\\\mathcal{G}^{\\\\\\\\lambda} _ n,\\\\theta)]$, which is different from the definition of $\\\\\\\\hat{\\\\theta}_{aug}$ in line 195 where $\\\\\\\\hat{\\\\theta} _ {aug} = \\\\\\\\mathop{\\\\rm argmin} _ {\\\\theta} \\\\\\\\frac{1}{NM} \\\\\\\\sum _ {n=1}^N \\\\\\\\sum _ {m=1}^M \\\\ell(\\\\\\\\mathcal{G} _ n^{\\\\\\\\lambda _ {n,m}}, \\\\theta)$. This could make the inequality $v_3 \\\\leq 0$ do not hold. Please clarify and check the definition of $\\\\\\\\hat{\\\\theta} _ {aug}$. If the above issue do exist, you should consider revising your proof accordingly.\", \"q5\": \"In line 221-223, you claim that minimizing the term $\\\\\\\\mathbb{E} _ {\\\\\\\\mathcal{G} \\\\sim G} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [\\\\Vert \\\\\\\\mathcal{G}^{\\\\\\\\lambda} - \\\\\\\\mathcal{G} \\\\Vert ]$ can guarantee with a high probability to decrease both the Rademacher complexity and the generalization risk. And you also show that the Rademacher complexity term $\\\\\\\\mathcal{R}(\\\\ell _ {aug})$ is upper bounded by $\\\\\\\\mathop{\\\\rm max} _ {n=1,\\\\ldots,N} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [ \\\\Vert \\\\\\\\mathcal{G}^{\\\\\\\\lambda} _ n - \\\\\\\\mathcal{G} _ n \\\\Vert ]$, which is a empirical estimation of $\\\\\\\\mathbb{E} _ {\\\\\\\\mathcal{G} \\\\sim G} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [\\\\Vert \\\\\\\\mathcal{G}^{\\\\\\\\lambda} - \\\\\\\\mathcal{G} \\\\Vert ]$ w.r.t. $\\\\\\\\mathcal{G}$. 
Therefore, minimizing the term $\\\\\\\\mathbb{E} _ {\\\\\\\\mathcal{G} \\\\sim G} \\\\\\\\mathbb{E} _ {\\\\\\\\lambda \\\\sim \\\\\\\\mathcal{P}} [\\\\Vert \\\\\\\\mathcal{G}^{\\\\\\\\lambda} - \\\\\\\\mathcal{G} \\\\Vert ]$ may not guarantee to decrease the Rademacher complexity term $\\\\\\\\mathcal{R}(\\\\ell _ {aug})$. Please clarify this issue or modify your claim in line 221-223.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors' feedback\", \"comment\": \"Regarding W4:\\nThe setting is clear when the distance between two graphs is defined in the embedding space, when not how it is defined? is it based on the Frobenius or spectral norm of the difference of their adjacency matrices?\", \"regarding_q2\": \"Even if GMMs are universal approximators, it is not guaranteed that the samples will remain within the distribution. Firstly, the approximation capabilities depend on the number of Gaussian components used. Secondly, the EM algorithm can converge to incorrect solutions, there are even simple cases in which it gets stuck in a local minimum.\"}", "{\"metareview\": \"This paper proposes a data augmentation method to enhance GNN generalization. The proposed method applied Gaussian Mixture Models (GMMs) to the readout features, and augment data following the trained GMM. Then the classifier is trained on the augmented data.\\n\\nUnfortunately, the proposed method does not give a significant improvement of the performance. The proposed method just applies a general data augmentation method to the GNN setting and does not make use of any specific property of GNNs, which limits the novelty of the proposal. The reason why the proposed method does not give significant performance improvement would be that the method requires to fit GMMs (generative model) that is harder than just solving classification problem. 
Thus, this data augmentation approach does not necessarily perform well, not only for GNNs but also for general classification problems. \\n\\nFor these reasons, I don't recommend acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"The drawbacks of this paper were not completely resolved through discussions. Indeed, the performance improvement looks like mere chance.\"}", "{\"summary\": \"The paper discusses data augmentation for graphs. The concrete proposal is to use a Gaussian mixture model. The justification for this proposed approach is that Gaussian mixture models are universal density estimators.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"Data augmentation for graphs is a topic that warrants investigation. There is little work on the subject.\\nNumerical evaluations are comprehensive.\", \"weaknesses\": \"The proposed data augmentation method is not specific to graphs. It could apply to any data type. No arguments are given as to whether this is a suitable way of augmenting graph datasets.\\n\\nNumerical evaluations are comprehensive but underwhelming. Improvements are marginal relative to training without data augmentation. All but 1 improvement in Tables 1 and 2 are well within one standard deviation and can be explained by random chance.\", \"questions\": \"I do not understand Theorem 1. Please expand the explanation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7B8K\", \"comment\": \"*We thank Reviewer 7B8K for the feedback. 
In what follows, we address the raised questions and weaknesses point-by-point.*\\n\\n***[Weaknesses 1] Graph-Specificity of our Approach***\\n\\nWe acknowledge that our proposed data augmentation method is not inherently specific to graphs and could indeed be applied to other data types. However, we view this as a significant strength. Working on the field of Graph Machine Learning, we chose to apply this method to Graph Neural Networks (GNNs) because of its potential to enhance the performance of graph-based models. Moreover, several established graph augmentation methods, such as SubMix, GeoMix, and GraphMix, are adaptations of general data augmentation strategies originally designed for other domains, such as Mixup. These approaches have proven to be effective in graph contexts despite their general origins.\\n\\nNotably, the choice of augmenting the train dataset at the level of graph representation, and not at the level of graph input, is motivated by Theorem 3.3, which shows that a data augmentation strategy should depend on the specific model architecture and its weights. \\n\\nAnother key motivation is that, unlike many existing baselines which focus on augmenting either the graph structure (e.g., perturbing edges) or the node features separately, our method operates directly on graph representations. This enables us to simultaneously augment both structural and feature information, leading to improved generalization in GNNs.\\n\\n***[Weaknesses 2] Improvements of our Approach***\\n\\nWe trained all baseline models using the same train/validation/test splits, GNN architectures, and hyperparameters to ensure a fair comparison. It is worth noting that the baselines also exhibit high standard deviations, which is a common characteristic in graph classification tasks. 
Unlike node classification, graph classification is known to have larger variance in performance metrics [1].\\n \\nIn addition to outperforming the baselines on most datasets, our approach offers an advantage in terms of time complexity (cf. Table 6 in Appendix E). The training time of baseline models varies depending on the augmentation strategy used, specifically, whether it involves pairs of graphs or individual graphs. Even in cases where a graph augmentation has a low computational cost for some baselines, training can still be time-consuming as multiple augmented graphs are required to achieve satisfactory test accuracy. For instance, methods like DropEdge, DropNode, and SubMix, while computationally simple, require generating multiple augmented samples at each epoch, thereby increasing the overall training time. In contrast, GMM-GDA introduces a more efficient approach by generating only one augmented graph per training instance, which is reused across all epochs. This design ensures a balance between computational efficiency and augmentation effectiveness, reducing the overall training burden while maintaining strong performance. \\n\\n[1] Bianchi, F. M., Lachi, V. (2024). The expressive power of pooling in graph neural networks. Advances in neural information processing systems, 36.\\n\\n***[Question 1] Explanation of Theorem 3.1.***\\n\\nIn Theorem 3.1, we established a mathematical framework to connect graph data augmentation with its impact on the generalization of GNNs. To evaluate GNN generalization, we employed the concept of Rademacher Complexity to derive a regret bound on the generalization gap. Rademacher Complexity, a fundamental concept in statistical learning theory, quantifies a model's capacity to generalize to unseen data by assessing its ability to fit random labels or noise. Better generalization is associated with a lower Rademacher Complexity. 
Theorem 3.1 demonstrates that data augmentation can reduce Rademacher Complexity, i.e., $\\\\mathcal{R}(\\\\ell_{aug}) \\\\leq \\\\mathcal{R}(\\\\ell)$. Specifically, the augmented graph should achieve a lower upper bound value for the following expression: $\\\\max_{n \\\\in \\\\{1, \\\\ldots,N\\\\}} \\\\mathbb{E}_{\\\\lambda \\\\sim \\\\mathcal{P}} \\\\left [ \\\\left \\\\| \\\\mathcal{G}_n^\\\\lambda -\\\\mathcal{G}_n \\\\right \\\\| \\\\right ]$ which implies that the augmented data must follow the same distribution as the original graph representation. This condition is ensured by Gaussian Mixture Models (GMMs), which are universal approximators.\\n\\n\\n*Given the brevity of your review, we hope that our additional explanations have helped you understand our work more deeply and give you the opportunity to reevaluate your assessment of our work.*\"}", "{\"comment\": \"Dear Reviewer 25Li,\\n\\nWe want to thank you again for the follow-up questions that allowed us to extend our ablation studies and to clarify our paper. As the discussion period ends soon, we would greatly appreciate a response if we have satisfactorily addressed your questions and concerns. If you have any further questions/concerns, please let us know and we will be happy to provide further answers.\\n\\nBest regards,\\nThe Authors\"}" ] }
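The augmentation pipeline debated in the thread above (fit a per-class Gaussian mixture on the readout embeddings, sample synthetic embeddings from it, and fine-tune the post-readout classifier on the union) can be sketched in plain Python. The sketch below is an illustrative assumption, not the paper's code: for brevity it fits the K=1 special case (a single diagonal Gaussian per class), whereas the method discussed fits a K-component GMM with the EM algorithm; all function names are hypothetical.

```python
import random
import statistics

def fit_class_gaussians(embeddings_by_class):
    """Fit one diagonal Gaussian per class over readout embeddings.

    K=1 special case of the per-class GMM described in the reviews;
    the actual method fits a K-component mixture via EM.
    """
    params = {}
    for label, vectors in embeddings_by_class.items():
        dims = list(zip(*vectors))  # transpose: one tuple per embedding dimension
        mean = [statistics.fmean(d) for d in dims]
        std = [statistics.pstdev(d) or 1e-6 for d in dims]  # guard against zero std
        params[label] = (mean, std)
    return params

def sample_augmented(params, label, n_samples, rng):
    """Draw synthetic embeddings for `label`, to be mixed with the real
    embeddings when fine-tuning the post-readout (linear) classifier."""
    mean, std = params[label]
    return [[rng.gauss(m, s) for m, s in zip(mean, std)]
            for _ in range(n_samples)]

# Toy readout embeddings for two classes (2-dimensional).
rng = random.Random(0)
data = {0: [[0.0, 1.0], [0.2, 0.8], [-0.1, 1.1]],
        1: [[3.0, -1.0], [2.8, -0.9], [3.2, -1.2]]}
params = fit_class_gaussians(data)
augmented = sample_augmented(params, 1, 5, rng)
```

Because each class is summarized once and reused, sampling is cheap, which matches the efficiency argument made for generating a single augmented instance per training example.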
0TSAIUCwpp
Diffusion-based Extreme Image Compression with Compressed Feature Initialization
[ "Zhiyuan Li", "Yanhui Zhou", "Hao Wei", "Chenyang Ge", "Ajmal Saeed Mian" ]
Diffusion-based extreme image compression methods have achieved impressive performance at extremely low bitrates. However, constrained by the iterative denoising process that starts from pure noise, these methods are limited in both fidelity and efficiency. To address these two issues, we present $\textbf{R}$elay $\textbf{R}$esidual $\textbf{D}$iffusion $\textbf{E}$xtreme $\textbf{I}$mage $\textbf{C}$ompression ($\textbf{RDEIC}$), which leverages compressed feature initialization and residual diffusion. Specifically, we first use the compressed latent features of the image with added noise, instead of pure noise, as the starting point to eliminate the unnecessary initial stages of the denoising process. Second, we design a novel relay residual diffusion that reconstructs the raw image by iteratively removing the added noise and the residual between the compressed and target latent features. Notably, our relay residual diffusion network seamlessly integrates pre-trained stable diffusion to leverage its robust generative capability for high-quality reconstruction. Third, we propose a fixed-step fine-tuning strategy to eliminate the discrepancy between the training and inference phases, further improving the reconstruction quality. Extensive experiments demonstrate that the proposed RDEIC achieves state-of-the-art visual quality and outperforms existing diffusion-based extreme image compression methods in both fidelity and efficiency. The source code and pre-trained models will be released.
[ "extreme image compression", "diffusion models", "compressed feature initialization", "residual diffusion" ]
https://openreview.net/pdf?id=0TSAIUCwpp
https://openreview.net/forum?id=0TSAIUCwpp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCAagIqWZf", "l2HiIm5PkK", "jSpHjraxNY", "hWOskGkXTp", "eFEkcuYPoe", "dlnBLkCZdS", "aFGSvMgLa3", "YlEtwCg2s5", "XowLe9u6vH", "X01ATnXUkj", "VHx33XrkuJ", "SSf0W25ZZI", "Qu62Ph0aoY", "Q7sFTAdOG8", "L8vbDFja98", "KKZFfdxkaX", "IeUMGLMlTU", "DumDEUmbxj", "DiTTVxLDHd", "CY51q7V6KH", "764vOr4oKQ", "6Xa0hX2jGA", "2DGW6Nhk5b", "0P97feyX6o" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1732681963873, 1732679671736, 1732672064122, 1730560632476, 1732671960893, 1732757919189, 1732463865897, 1732693574031, 1730607104618, 1732672253871, 1732464226503, 1732461070104, 1732436303165, 1732459840820, 1732457486945, 1730381266200, 1732673048741, 1730101613961, 1732549382809, 1732450585473, 1732455480902, 1737791571501, 1732543553739, 1732449650325 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_PdRA" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_Ejkn" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_Ejkn" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_1h9J" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_PdRA" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_X8Q6" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ], [ "ICLR.cc/2025/Conference/Submission2931/Reviewer_1h9J" ], [ "ICLR.cc/2025/Conference/Submission2931/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your response. The author's response has answered my doubts.\"}", "{\"comment\": \"Thanks for your response.\\n\\nIt has resolved my confusion. However, there are still the following limitations.\\n\\nFirstly, compared to PASD and SeeSR, the innovation of modifying the start point is limited. Besides, the start points can also be adjusted, such as in AddNet [1], CCSR [2]. \\n\\nSecondly, although ResShift trains a diffusion model from scratch, its core idea focuses on residual modeling. However, the idea of residual modeling in this paper is similar to that of ResShift. This paper just transfers residual modeling to stable diffusion. The innovation may not meet the requirements for ICLR.\\n\\n[1] https://arxiv.org/pdf/2404.01717\\n[2] https://arxiv.org/pdf/2401.00877v1\"}", "{\"title\": \"Look forward to your response\", \"comment\": \"Dear Reviewer X8Q6,\\n\\nWe hope you have had the opportunity to review our responses and clarifications. 
As the discussion period is nearing its conclusion, we would greatly appreciate it if you could confirm whether our updates have adequately addressed your concerns.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes the Relay Residual Diffusion Extreme Image Compression method to achieve fidelity and efficiency. In particular, this paper uses latent features with added noise as the starting point and employs residual diffusion to improve the fidelity. In addition, this paper proposes a fixed-step fine-tuning strategy to reduce the number of steps.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is clear in describing its contributions and methodology.\\nThe experimental arrangement is relatively reasonable, and the ablation study can prove the effectiveness of the strategies proposed by the author.\", \"weaknesses\": \"The novelty is limited. Firstly, adding noise to the latent features is a common operation, which is used in many papers [1,2]. Secondly, the proposed residual diffusion is similar to ResShift [3]. The author should fully research the diffusion-based methods on low-level vision tasks published in the past two years and better analyze the differences between them.\\n[1] SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution CVPR23\\n[2] Pixel-Aware Stable Diffusion for Realistic Image Super-Resolution and Personalized Stylization ECCV24\\n[3] ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting NIPS23\\n\\nThe motivation is not clear. In the third paragraph of Sec. 1, the author analyzes the limitations of diffusion-based methods. The first limitation is \u2018these methods rely on an iterative denoising process to reconstruct raw images from pure noise which is inefficient for inference\u2019. 
The second limitation is \u2018initiating the denoising process from pure noise introduces significant randomness, compromising the fidelity of the reconstructions.\u2019 In addition to adding noise to the latent features, the author also employs a residual diffusion process and pre-trained stable diffusion to address these limitations. It remains unclear how residual diffusion and pre-trained stable diffusion can resolve the randomness caused by pure noise and improve the fidelity of the reconstructions.\\n\\nThere are two doubts about controllable detail generation. Firstly, the pre-trained stable diffusion is used to obtain low-frequency information. Since the pre-trained stable diffusion has not seen the inputs in the authors' task, why can it produce the expected results? Secondly, why did the authors choose to use pre-trained stable diffusion instead of directly using CFG?\", \"questions\": \"Figure 4 shows that the authors' method does not achieve the best results on metrics such as PSNR, MS-SSIM, and SSIM, and there is a significant gap compared to other methods. Note that PSNR, MS-SSIM, and SSIM are metrics used to evaluate fidelity. This is inconsistent with the authors' motivation. In the abstract, the authors mention that the proposed method aims to address the limitations of fidelity and efficiency.\\n\\nThe authors mention in their experiments that they trained five models, each corresponding to different \u03bb_r values. However, in the comparative experiments (e.g., Tab. 1, Tab. 2, Tab. 3, Fig. 4, Fig. 5, Fig. 6, Fig. 7, etc.), the authors do not specify which model's results were used. In addition, the author did not mention the guidance scale values used for these experimental results.\\n\\nIn Tab. 3, the author uses 2/5 in the DS column, so it is unclear whether the performance in the table refers to the 2-step model or the 5-step model. In addition, just using distortion of BD-rate or perception of BD-rate is not clear. 
The distortion includes PSNR, and SSIM, etc. and perception includes DISTS, FID, and LPIPS, etc. It is not clear which metrics distortion and perception represent respectively. The author should provide detailed results for metrics such as PSNR, SSIM, and LPIPS. Meanwhile, in the paper comparing the methods (PerCo, MS-ILLM), they did not use the bd-rate metric. Therefore, it is a good choice that the author just employs the values of PSNR, SSIM or LPIPS to demonstrate the performance and not use BD-rate.\\n\\nIn Tab. 2, the BD-rate of RDEIC with 2 DS is 0, while the BD-rate of RDEIC with 2 DS is also 0. So, which is the anchor in Tab. 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Look forward to your response\", \"comment\": \"Dear Reviewer Ejkn,\\n\\nWe hope you have had the opportunity to review our responses and clarifications. We would be grateful if you could confirm whether our updates have fully addressed your concerns. Should you have any further comments or questions, we would be more than happy to address them at your convenience.\\n\\nThank you once again for your valuable time and thoughtful feedback. We genuinely appreciate your efforts in reviewing our work.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thanks to Reviewer PdRA\", \"comment\": \"We sincerely appreciate your response and the valuable contribution your feedback has made to improving our work! If all your queries have been adequately addressed, we kindly ask you to consider raising your rating. However, if you still have any remaining doubts or concerns, we would be more than happy to engage in further discussion to clarify them.\"}", "{\"title\": \"Response to Reviewer X8Q6 (Part III)\", \"comment\": \"**Response to Question 1**:\\n\\n1. 
**Definition of \\u201cExtremely Low Bitrates\\u201d**: In this work, we define \\u201cextremely low bitrates\\u201d as scenarios where the bits-per-pixel (bpp) falls below 0.1, aligning with the definition used in DiffEIC [1].\\n\\n2. **Adjusting Thresholds Based on Content Complexity**: While it is theoretically possible to adjust the \\u201cextremely low\\u201d threshold based on content complexity, we argue that such adjustments are unnecessary. In academic evaluations, methods are assessed based on their average performance across diverse datasets, ensuring fairness and generality, regardless of individual image characteristics. Similarly, in practical applications, the primary focus is on meeting bandwidth or storage constraints, which are typically independent of image content.\\n\\n3. **Broader Applications in Bandwidth-Constrained Scenarios**: The potential applications include satellite communication, underwater communication, and the transmission of non-critical internet images.\\n\\n4. **Diffusion Models at Medium and High Bitrates**: Diffusion models retain significant value even at medium and high bitrates. Compared to GANs, diffusion models offer advantages such as greater training stability and superior generative performance. Methods like CDC [2] demonstrate the applicability of diffusion-based approaches in medium- and high-bitrate scenarios by leveraging their generative capabilities to produce high-quality, detail-rich reconstructions.\\n\\n---\\n\\n**Response to Question 2**:\\n\\n1. **Learning and Training Process**: Codebook provides a discrete quantization of the latent space, enabling efficient encoding of side information $l_p$ . The codebook loss is: $L_{cb} = \\\\Vert sg(l_p) - \\\\hat{l}_p \\\\Vert_2^2 + \\\\beta \\\\Vert sg(\\\\hat{l}_p) - l_p \\\\Vert_2^2$, where $sg(\\\\cdot)$ is the stop-gradient operator and $\\\\beta = 0.25$ in our experiments. 
$\\\\Vert sg(\\\\hat{l}_p) - l_p \\\\Vert_2^2$ ensures that $l_p$ stays close to the nearest codeword $\\\\hat{l}_p$, and $\\\\Vert sg(l_p) - \\\\hat{l}_p \\\\Vert_2^2$ encourages $\\\\hat{l}_p$ to move closer to $l_p$. During training, the embeddings of the codebook are dynamically updated based on the gradients from the codebook loss. In our implementation, we directly utilize CVQ-VAE [3]; further details can be found in the corresponding paper.\\n\\n2. **Initialization**: The codebook is initialized uniformly as `self.embedding.weight.data.uniform_(-1.0 / n_e, 1.0 / n_e)`, where $n_e = 16384$ in our experiments.\\n\\n3. **Interaction with $l_p$**: Each element of $l_p$ is replaced with its closest codeword $c_k$: $\\\\hat{l}_q^{ij} = argmin _{c_k \\\\in C} \\\\Vert l_p^{ij} - c_k \\\\Vert_2^2$, where $C$ denotes the codebook, and $i$ and $j$ denote positions.\\n\\n---\\n\\n**Response to Question 3**:\\n\\nIn compression tasks, placing diffusion in the encoding stage or within the hyperprior is not an optimal choice, as these stages cannot fully utilize the generative capabilities of diffusion models.\\n\\nThe encoding stage primarily focuses on compactly representing the input data, and placing diffusion at this stage would shift computational complexity to the encoding end without contributing to the reconstruction process. Similarly, although placing diffusion within the hyperprior could theoretically reduce computational complexity because of the smaller feature resolution, this approach would mainly refine side information and have a relatively minor impact on the overall reconstruction quality.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, and Jingwen Jiang. Towards extreme image compression with latent feature guidance and diffusion prior. IEEE Transactions on Circuits and Systems for Video Technology, 2024.\\n\\n[2] Ruihan Yang and Stephan Mandt. Lossy image compression with conditional diffusion models. 
In Thirty-seventh Conference on Neural Information Processing Systems, 2023.\\n\\n[3] Chuanxia Zheng and Andrea Vedaldi. Online clustered codebook. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22798\\u201322807, 2023.\"}", "{\"title\": \"Response to Reviewer Ejkn regarding innovation\", \"comment\": \"Thank you for your timely response and thoughtful feedback. We are glad to have clarified some of your concerns and would like to address the remaining points in detail.\\n\\n---\\n\\n**1. Regarding Starting Point**\\n\\nFirst, after reviewing their papers and codebases, we found that the initial points in both methods are not adjustable as you mentioned. Specifically, in CCSR, the initial point is set to pure noise, as defined in its code:\\n\\n `x_T = torch.randn(shape, device=model.device, dtype=torch.float32)`. \\n\\nAddNet, on the other hand, primarily focuses on controlling conditions during the denoising phase. Its initial point is similar to PASD and SeeSR, as reflected in its code: \\n\\n`parser.add_argument(\\\"--start_point\\\", type=str, choices=['lr', 'noise'], default='lr') # LR Embedding Strategy, choose 'lr latent + 999 steps noise' as diffusion start point.`\\n\\nSecond, these previous methods only predict noise and do not consider the residual between degraded and target features. In contrast, our approach directly uses compressed features with slight noise as the starting point during training, enabling the network to simultaneously remove both noise and residual. This ensures consistency between the training and testing phases, improving both efficiency and reconstruction quality. We believe this makes our work novel compared to these prior methods.\\n\\n---\\n\\n**2. Regarding similarities to ResShift**\\n\\nWe acknowledge that our work shares a conceptual similarity with ResShift in utilizing residual modeling. 
However, we extend this idea by incorporating the powerful generative capability of pre-trained text-to-image diffusion models like Stable Diffusion into the residual diffusion framework\\u2014an area that has not been explored in previous works.\\n\\nIt is important to note that Stable Diffusion is trained for noise prediction and cannot directly handle residuals. To leverage Stable Diffusion's robust generative capability while enabling the network to process residuals, we designed a new diffusion equation that satisfies the following conditions:\\n\\n- The diffusion equation retains the same structure as Stable Diffusion's equation, allowing seamless integration and utilization of its generative capability.\\n\\n- It can progressively add residuals in a manner analogous to noise addition.\\n\\n- $\\\\boldsymbol{z}_N = \\\\sqrt{\\\\bar{\\\\alpha}_N} \\\\boldsymbol{z}_c + \\\\sqrt{1-\\\\bar{\\\\alpha}_N}\\\\epsilon_N$, where $\\\\boldsymbol{z}_c$ is the compressed features.\\n\\nSpecifically, we derived the following equation from Stable Diffusion\\u2019s diffusion framework:\\n\\n- $\\\\boldsymbol{z}_{n} = \\\\sqrt{\\\\bar{\\\\alpha}_n} (\\\\boldsymbol{z}_0 + \\\\lambda \\\\frac{\\\\sqrt{1-\\\\bar{\\\\alpha}_n}}{\\\\sqrt{\\\\bar{\\\\alpha}_n}} \\\\boldsymbol{e}) + \\\\sqrt{1-\\\\bar{\\\\alpha}_n} \\\\epsilon_n = \\\\sqrt{\\\\bar{\\\\alpha}_n} \\\\boldsymbol{z}_0 + \\\\sqrt{1-\\\\bar{\\\\alpha}_n} (\\\\lambda \\\\boldsymbol{e} + \\\\epsilon_n)$, where $\\\\lambda=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}_N}}{\\\\sqrt{1-\\\\bar{\\\\alpha}_N}}$.\\n\\nThis innovation allows the network to combine residual modeling with the robust generative capabilities of Stable Diffusion. 
Therefore, summarizing our work as \\u201cjust transferring residual modeling to Stable Diffusion\\u201d does not fully capture the novelty and depth of our contributions.\\n\\nFurthermore, while our implementation leverages Stable Diffusion, the underlying methodology and derivation are general and can be readily extended to other text-to-image diffusion models.\\n\\n---\\n\\n**Our Perspective on Innovation**\\n\\nFinally, we believe that discussing innovation without considering the specific task is inappropriate. In this work, our goal is to introduce residual diffusion to overcome the efficiency and fidelity limitations of existing diffusion-based extreme image compression methods. During this process, we addressed several critical challenges, including:\\n\\n- Integrating residual diffusion with pre-trained text-to-image diffusion models.\\n\\n- Achieving end-to-end training of the compression module within a residual diffusion framework.\\n\\n- Resolving inconsistencies between timestep-independent training and inference.\\n\\n- Enabling diverse reconstruction outputs without compromising efficiency.\\n\\nExperimental results demonstrate that our method achieves significant improvements in both reconstruction performance and computational efficiency. We believe that, in the field of image compression, our work is novel and makes meaningful contributions.\\n\\n---\\n\\nWe hope this response addresses your concerns and clarifies the novelty and contributions of our work. Thank you again for your valuable feedback.\"}", "{\"summary\": \"This paper introduces RDEIC, a novel diffusion model for extreme image compression that accelerates the denoising process through compression feature initialization. 
It draws on techniques from several papers, e.g., the codec framework scheme in GLC[1], and the ControlNet in DiffEIC[2]. The results provide evidence that the proposed scheme achieves SOTA performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-written and easy to follow, and the experiments are detailed and comprehensive.\\n2.\\tThis paper reduces computational complexity by reducing the number of denoising steps, which is valuable for resource-constrained environments.\", \"weaknesses\": \"1.\\tThe paper has limited innovation. Its pipeline looks like a simple combination of GLC[1] and DiffEIC[2], utilizing the codec framework of GLC[1] and the generative model of DiffEIC[2]. However, the paper does not compare performance with GLC[1].\\n2.\\tThis paper adopts a better-performing generative model, RRD, instead of Stable Diffusion, and with the stronger generative ability of RRD, better performance is obtained. So if DiffEIC-50 also adopts RRD, will it achieve better performance?\\n3.\\tThe conclusions of some visualization experiments are not rigorous enough. For example, in Fig. 1, despite the obvious subjective quality improvement of RDEIC, its bit rate is 7.5% higher than DiffEIC[2]. A similar problem can be observed in Figure 5.\\n4.\\tSome analysis needs to be included to show why RDEIC is worse than MS-ILLM on the NIQE metric.\\n\\n[1] Jia Z, Li J, Li B, et al. Generative Latent Coding for Ultra-Low Bitrate Image Compression. CVPR 2024.\\n\\n[2] Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, and Jingwen Jiang. Towards extreme image compression with latent feature guidance and diffusion prior. 
IEEE Transactions on Circuits and Systems for Video Technology, 2024.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Look forward to your response\", \"comment\": \"Dear Reviewer PdRA,\\n\\nThank you very much for your positive feedback on our work! We hope you have had the chance to review our responses and clarifications. As the discussion period is nearing its conclusion, we would greatly appreciate it if you could confirm whether our updates have fully addressed your concerns.\\n\\nThank you again for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer X8Q6 (Part IV)\", \"comment\": \"**Response to Question 4**:\\n\\nThank you for your insightful comment. Following your suggestion, we design two variants for comparison:\\n\\n1. **W/o denoising process**: In this variant, the compression module is trained jointly with the noise estimator, but the denoising process is bypassed during the inference phase.\\n\\n2. **W/o diffusion mechanism**: In this variant, the compression module is trained independently, completely excluding the influence of the diffusion mechanism.\\n\\nAs shown in Fig. 10 of the revised manuscript, bypassing the denoising process results in significant degradation, particularly in perceptual quality. This highlights the critical role of the diffusion mechanism in enhancing perceptual quality during reconstruction. As shown in Fig. 11 of the revised manuscript, the diffusion mechanism effectively adds realistic and visually pleasing details.\\n\\nAdditionally, Fig. 12(a) visualizes an example of bit allocation. The model trained jointly with the noise estimator allocates bits more efficiently, assigning fewer bits to flat regions (e.g., the sky in the image). Fig. 
12(b) shows the cross-correlation between each spatial pixel in $(\\\\boldsymbol{y} - \\\\boldsymbol{\\\\mu}) / \\\\boldsymbol{\\\\sigma}$ and its surrounding positions. The model trained jointly with the noise estimator exhibits lower cross-correlation, indicating reduced redundancy and more compact feature representations. These results indicate that the diffusion mechanism provides additional guidance for optimizing the compression module during training, enabling it to learn more efficient and compact feature representations.\\n\\nWe have included a detailed discussion in Appendix C of the revised manuscript.\"}", "{\"title\": \"Response to Reviewer X8Q6 (Part II)\", \"comment\": \"**Response to Weakness 5**:\\n\\nThanks for pointing out this issue. For the two models corresponding to larger bpp values, we use 2 denoising steps, while for the remaining three models, we use 5 denoising steps. As shown in Fig. 4 of the revised manuscript, the performance points of these five models with different denoising steps collectively form the performance curve, serving as the anchor for calculating the BD-rate. To avoid unnecessary confusion, we have removed this column from Table 1 of the revised manuscript.\\n\\n---\\n\\n**Response to Weakness 6**:\\n\\n1. We have provided the detailed derivation from Eq. (2) to Eq. (4) in Appendix A.\\n\\n2. We acknowledge that it is inaccurate to interpret $\\\\epsilon_{sd}(z_n, c)$ as the low-frequency component of the noise itself. To enhance clarity and rigor, we have revised the manuscript to refer to $\\\\epsilon_{sd}(z_n, c)$ as the \\u201clow-frequency control component.\\u201d\\n\\n3. We have clarified in Sec. 
3.1 that $\\\\boldsymbol{l}_p$ represents the side information used in the image compression module, while $\\\\hat{\\\\boldsymbol{l}}_p$ refers to the vector-quantized result of $\\\\boldsymbol{l}_p$, i.e., $\\\\hat{\\\\boldsymbol{l}}_p$ is the mapping of $\\\\boldsymbol{l}_p$ to its closest codebook entry.\\n\\n---\\n\\n**Response to Weakness 7**:\\n\\nThanks a lot for pointing out these two issues, we have corrected them in the revised manuscript.\"}", "{\"title\": \"Clarification of our RDEIC\", \"comment\": \"We thank all reviewers for their constructive feedback and the time they took to make the reviews. Before addressing your questions, we would like to clarify the focus of our work. In this work, **we aim to address fidelity and efficiency challenges commonly observed in existing diffusion-based extreme image compression methods (e.g., DiffEIC [1] and PerCo [2]) by improving the diffusion process and training strategy, rather than proposing novel network architectures.**\\n\\n---\\n\\n**Motivation**\\n\\nAs described in the introduction, we observed two major issues in existing diffusion-based extreme image compression methods:\\n\\n- **Inefficient denoising process**: Existing diffusion-based extreme image compression methods follow the denoising process of DDPM [3], starting from pure noise to iteratively reconstruct the image. This requires a large number of denoising steps (e.g., 50 steps in DiffEIC) to achieve optimal reconstruction, making the process highly inefficient. Moreover, using random noise as the starting point introduces significant randomness, which compromises reconstruction fidelity.\\n\\n- **Discrepancy between training and inference phases**: These methods train each time-step independently. 
For image compression, the lack of coordination among time-steps can result in error accumulation and suboptimal reconstruction.\\n\\n---\\n\\n**Methodology**\\n\\nTo address the above issues, we propose the following solutions in this paper:\\n\\n- **Compressed Feature Initialization**: Instead of starting from pure noise, we use the degraded feature $\\\\boldsymbol{z}_c$ and slight noise $\\\\epsilon_N$ to form the starting point $\\\\boldsymbol{z}_N = \\\\sqrt{\\\\bar{\\\\alpha}_N} \\\\boldsymbol{z}_c + \\\\sqrt{1-\\\\bar{\\\\alpha}_N}\\\\epsilon_N$. As shown in Fig. 2(b), since $N$ (300) is much smaller than $T$ (1000), $\\\\boldsymbol{z}_N$ retains most of the information from the compressed feature $\\\\boldsymbol{z}_c$, providing a solid foundation for subsequent detail generation.\\n\\n- **Relay Residual Diffusion (RRD)**: We also propose a novel relay residual diffusion process to remove both the added noise $\\\\epsilon_N$ and the residual $\\\\boldsymbol{e}$ ($\\\\boldsymbol{e}=\\\\boldsymbol{z}_c-\\\\boldsymbol{z}_0$) contained in $\\\\boldsymbol{z}_N$. The diffusion equation of our relay residual diffusion is directly derived from Stable Diffusion's diffusion equation, enabling seamless integration of pre-trained Stable Diffusion to leverage its robust generative capability for high perceptual reconstruction. **To the best of our knowledge, we are the first to integrate Stable Diffusion into a residual diffusion framework.**\\n\\n- **Fixed-Step Fine-Tuning (FSFT)**: To eliminate the discrepancy between training and inference phases, we propose to fine-tune the model using the entire reconstruction process, further enhancing performance.\\n\\n- **Controllable Detail Generation**: Inspired by classifier-free guidance (CFG), we propose a method to balance smoothness and sharpness, addressing the fixed-step constraint introduced by FSFT. 
This approach allows users to explore and customize outputs according to their personal preferences.\\n\\nEquipped with the above components, the proposed RDEIC effectively combines the efficiency of residual diffusion with the powerful generation capability of Stable Diffusion, outperforming existing diffusion-based extreme image compression methods in both reconstruction performance and efficiency.\\n\\n---\\n\\n**Implement details**\\n\\nWe train five RDEIC models with different compression ratios, ranging from 0.02 bpp to 0.12 bpp. **During inference, we use 2 denoising steps for the two models with larger bpp and 5 denoising steps for the remaining three models.** Accordingly, we use 2/5 to represent the denoising steps of RDEIC.\\n\\n---\\n\\nWe hope this clarification provides a better understanding of our motivation, methodology, and contributions. If you have any further questions, please do not hesitate to contact us.\\n\\n**Reference**\\n\\n[1] Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, and Jingwen Jiang. Towards extreme imagecompression with latent feature guidance and diffusion prior. IEEE Transactions on Circuits and Systems for Video Technology, 2024.\\n\\n[2] Marlene Careil, Matthew J. Muckley, Jakob Verbeek, and St\\u00e9phane Lathuili\\u00e8re. Towards image compression with perfect realism at ultra-low bitrates. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840\\u20136851, 2020.\"}", "{\"title\": \"Response to Reviewer X8Q6 (Part I)\", \"comment\": \"Thank you for your time and constructive comments. 
We have revised the manuscript based on your comments and address the weaknesses and questions raised in your review below:\\n\\n---\\n\\n**Response to Weakness 1**:\\n\\nWhile recent works have explored leveraging the robust generative capability of Stable Diffusion for better perceptual quality (e.g., DiffBIR [1] and DiffEIC [2]) or using residual diffusion for acceleration (e.g., ExposureDiffusion [3] and ResShift [4]), none have attempted to combine Stable Diffusion with residual diffusion. To the best of our knowledge, we are the first to successfully integrate Stable Diffusion into a residual diffusion framework.\\n\\n---\\n\\n**Response to Weakness 2**:\\n\\nThank you for raising this concern. We have carefully reviewed our experimental setup and found no issues with the implementation. The observed color deviations in Text+Sketch are expected, as this method reconstructs images solely from sketches and semantic text. This behavior is consistent with the results reported in its original paper. Additionally, the slight brightness bias observed in PerCo-20 is consistent with the results shown in the PerCo paper.\\n\\n---\\n\\n**Response to Weakness 3**:\\n\\nThank you for highlighting this concern. To address it, we have compared our baseline with DiffEIC. Note that the only difference between the two lies in the compression module. As shown in Fig. 6 and Table 2 (left) of the revised manuscript, the negligible differences in performance between DiffEIC and our baseline demonstrate that the choice of entropy model has minimal impact on overall performance. This confirms that the improvements in our method are primarily attributed to the proposed Relay Residual Diffusion (RRD) and Fixed-Step Fine-Tuning (FSFT) strategies, as further validated by the ablation study presented in Fig. 6 and Table 2 (left).\\n\\n>Table 2(left): The impact of RRD and FSFT on performance. Performance is represented by BD-rate (\\\\%), using DiffEIC as the anchor. 
Distortion metrics include PSNR, MS-SSIM, and SSIM. Perceptual metrics include DISTS, FID, KID, NIQE, and LPIPS. DS denotes the number of denoising steps. 2/5 denotes that we use 2 denoising steps for the two models with larger bpp and 5 steps for the remaining models.\\n| Methods | DS | Distortion | Perception | Average |\\n|----------------|-------|------------:|------------:|---------:|\\n| Baseline | 50 | 7.4 | -1.8 | 2.8 |\\n| +RRD | 2/5 | -31.0 | 12.7 | -9.1 |\\n| +RRD+FSFT | 2/5 | -42.2 | -36.6 | -39.4 |\\n\\n---\\n\\n**Response to Weakness 4**:\\n\\nThank you for this valuable comment. Following your suggestion, we have reselected DiffEIC [2] as the anchor and added detailed notes to the table captions, as shown in Table 2 and Table 3 of the revised manuscript.\\n\\n>Table 3: BD-rate (\\\\%) for different methods on the CLIC2020 dataset with DiffEIC as the anchor. For distortion-oriented methods (i.e., BPG, VVC, and ELIC), we omit their perceptual metrics. The best results are highlighted in **bold**.\\n| Methods | | | Perception | | | | Distortion | | Average |\\n|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|\\n| | DISTS | FID | KID | NIQE | LPIPS | PSNR | MS-SSIM | SSIM | |\\n| BPG | - | - | - | - | - | -66.2 | -32.8 | -40.3 | - |\\n| VVC | - | - | - | - | - | -77.8 | -51.3 | -58.6 | - |\\n| ELIC | - | - | - | - | - | **-82.7** | **-54.6** | **-66.7** | - |\\n| HiFiC | 201.8 | 248.2 | 372.6 | -28.7 | 63.4 | -29.1 | 2.7 | 14.7 | 105.7 |\\n| VQIR | 71.8 | 183.9 | 156.7 | 32.4 | 51.3 | 16.4 | 43.9 | 57.8 | 76.8 |\\n| PerCo | 66.1 | 67.6 | 65.1 | 5.2 | 67.7 | 33.9 | 69.2 | 77.7 | 56.6 |\\n| MS-ILLM | 28.5 | 40.9 | 34.6 | **-85.4**| **-44.7**| -75.4 | -44.7 | -38.5 | -21.5 |\\n| RDEIC (Ours)| **-17.9** | **-18.3**| **-22.1**| -83.7| -40.8| -61.3 | -32.7| -32.7| **-38.7** |\\n\\n---\\n\\n**Reference**\\n\\n[1] Xinqi Lin, Jingwen He, Ziyan Chen, Zhaoyang Lyu, Ben Fei, Bo Dai, Wanli Ouyang, Yu Qiao, and Chao Dong. 
Diffbir: Towards blind image restoration with generative diffusion prior. arXiv preprint arXiv:2308.15070, 2023.\\n\\n[2] Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, and Jingwen Jiang. Towards extreme imagecompression with latent feature guidance and diffusion prior. IEEE Transactions on Circuits and Systems for Video Technology, 2024.\\n\\n[3] Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C Kot and Bihan Wen. Exposurediffusion: Learning to expose for low-light image enhancement. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.\\n\\n[4] Zongsheng Yue, Jianyi Wang, and Chen Change Loy. Resshift: Efficient diffusion model for image super-resolution by residual shifting. In Thirty-seventh Conference on Neural Information Processing Systems, 2023\"}", "{\"title\": \"Response to Reviewer PdRA\", \"comment\": \"Thank you for your time and constructive comments. We have revised the manuscript based on your comments and address the weaknesses and questions raised in your review below:\\n\\n---\\n\\n**Response to Weakness 1 & Question 1:**\\n\\nFollowing your suggestion, we have included additional ablation experiments on computational complexity. The proposed relay residual diffusion (RRD) framework enables image reconstruction with 5 or even 2 denoising steps, significantly improving the computational efficiency during decoding. As shown in Table 2 (right) of the revised manuscript, incorporating RRD reduces the denoising time by a factor of 10$\\\\times$ to 25$\\\\times$ compared to the baseline:\\n\\n> | Methods | DS | Denoising Time | Speedup |\\n|---------------|:------:|:----------------------:|---------:|\\n| Baseline | 50 | 4.349 \\u00b1 0.013 | 1\\u00d7 |\\n| +RRD | 5 | 0.434 \\u00b1 0.002 | 10\\u00d7 |\\n|+RRD | 2 | 0.173 \\u00b1 0.001 | 25\\u00d7 |\\n\\nAdditionally, the fixed-step fine-tuning strategy is purely a fine-tuning strategy and does not introduce any additional computational overhead during decoding. 
We have incorporated this discussion into the **Ablations** section.\\n\\n---\\n\\n**Response to Weakness 2 & Question 2:**\\n\\nIn the revised manuscript, we have expanded our comparative analysis by including additional baseline methods, namely the traditional compression standard BPG [1] and the VAE-based compression method ELIC [2]. These additions provide a broader context for evaluating the performance of our approach relative to both traditional and modern learning-based compression techniques.\\n\\n---\\n\\n**Response to Weakness 3 & Question 3:**\\n\\nThank you for raising this concern. To assess the robustness and generalization ability of RDEIC, we have conducted additional experiments on the larger MS-COCO 30k dataset, which comprises 30,000 images spanning a diverse range of categories and content types. This dataset was constructed by selecting the same images from the COCO2017 training set [3] as utilized in PerCo [4].\\n\\nAs shown in Fig. 9 of the revised manuscript, RDEIC maintains consistent performance across this expanded dataset, demonstrating its ability to generalize effectively to unseen data, even in scenarios with more diverse and challenging content. Additionally, visualized examples of reconstructed images are provided in Fig. 16 of the revised manuscript to further illustrate the robustness of our approach.\\n\\nWe have included this discussion in Appendix C of the revised manuscript.\\n\\n---\\n\\n**Reference**\\n\\n[1] Fabrice Bellard. Bpg image format. 2014. URL https://bellard.org/bpg/\\n\\n[2] Dailan He, Ziming Yang, Weikun Peng, Rui Ma, Hongwei Qin, and Yan Wang. Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.5718\\u20135727, 2022.\\n\\n[3] Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. Coco-stuff: Thing and stuff classes in context. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1209\\u20131218, 2018.\\n\\n[4] Marlene Careil, Matthew J. Muckley, Jakob Verbeek, and St\\u00e9phane Lathuili\\u00e8re. Towards image compression with perfect realism at ultra-low bitrates. In The Twelfth International Conference on Learning Representations, 2024.\"}", "{\"summary\": \"The paper presents a novel approach called Relay Residual Diffusion Extreme Image Compression (RDEIC), which improves upon traditional diffusion-based image compression methods. By leveraging compressed latent features and a residual diffusion process, RDEIC enhances fidelity and efficiency, addressing limitations of iterative denoising processes that typically begin with pure noise. Experimental results indicate significant performance gains in compression rates while maintaining image quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) Introduces an innovative framework (RDEIC) that improves image compression efficiency.\\n2) Effectively addresses the fidelity issues present in existing diffusion-based methods.\\n3) Provides strong experimental results demonstrating the advantages of the proposed approach.\", \"weaknesses\": \"1) Limited discussion on the computational complexity of the new method.\\n2) Insufficient comparison with a broader range of existing compression techniques.\\n3) Potential overfitting concerns not addressed within the experimental analysis.\", \"questions\": \"1) Include a detailed analysis of the computational efficiency and resource requirements of RDEIC.\\n2) Expand the comparative analysis to include more baseline models and state-of-the-art techniques.\\n3) Address the possibility of overfitting by incorporating additional validation datasets or robustness tests.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": 
\"Look forward to further discussion\", \"comment\": \"Dear Reviewer 1h9J,\\n\\nThank you once again for dedicating your valuable time to reviewing our paper and for your prompt feedback! We would greatly appreciate it if you could confirm whether our revisions have fully addressed your concerns. If you have any additional comments or questions, we would be more than happy to address them at your convenience.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper introduces Relay Residual Diffusion Extreme Image Compression (RDEIC), a method for high-quality image compression at extremely low bitrates. RDEIC has three main components: (1) it begins denoising with compressed latent features plus noise instead of pure noise, reducing steps and improving fidelity; (2) it introduces a relay residual diffusion process, iteratively removing noise and residuals between compressed and target features, leveraging a pre-trained stable diffusion model for quality reconstruction; and (3) it applies a fixed-step fine-tuning strategy to minimize discrepancies between training and inference, further enhancing quality. Experimental results show that RDEIC achieves state-of-the-art visual quality, surpasses existing diffusion-based methods in fidelity and efficiency, and provides controllable detail generation to balance smoothness and sharpness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Given the effectiveness and complexity of diffusion models, fast diffusion sampling as a practical research approach holds significant value and positively impacts the community.\\n\\n2. The balance between smoothness and sharpness mentioned in the paper provides practical insights into this area. In a given compression state, determining how to map it to the sampling step \\n\\ud835\\udc41 can directly affect reconstruction quality. 
This mapping relationship is crucial to the model's effectiveness and stability, which the authors have explored in detail.\", \"weaknesses\": [\"1. The novelty of this work is relatively modest, though it provides a valuable practical application in image compression. Many recent studies have explored similar approaches, starting the diffusion process from low-quality images rather than pure noise to enhance efficiency and accelerate sampling. Integrating degraded image embeddings into a pre-trained diffusion model as a plug-and-play module is also a relatively well-explored approach in the field of image processing. Prior works include:\", \"[1] Lin, X., He, J., Chen, Z., Lyu, Z., Dai, B., Yu, F., ... & Dong, C. (2023) in \\\"DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior\\\",\", \"[2] Wang, Y., Yu, Y., Yang, W., Guo, L., Chau, L. P., Kot, A. C., & Wen, B. (2023) in \\\"ExposureDiffusion: Learning to Expose for Low-Light Image Enhancement\\\" (ICCV),\", \"[3] Ma, J., Zhu, Y., You, C., & Wang, B. (2023) in \\\"Pre-trained Diffusion Models for Plug-and-Play Medical Image Enhancement\\\" (MICCAI).\", \"2. The Text-Sketch in Figure 1 and Figure 5 shows significant deviations in chroma reconstruction. I am unsure whether this is due to the baseline itself or if there was a mix-up between RGB and BGR channels during the experimental preprocessing stage. Additionally, the brightness of PerCo-20 in Figure 1 appears to be slightly biased compared to the ground truth. It is recommended to carefully examine the methods used for comparison, especially when the baselines are highly novel, and when results show noticeably unusual behavior, to ensure a fairer comparison.\", \"3. Potential issue with variable control in the entropy model. The paper employs unusual entropy models (i.e., VQ-E and VQ-D) without adequate control or detailed explanation. 
This may lead to comparison results that do not accurately reflect the primary contribution of the proposed approach when contrasted with other algorithms, given that the precision of entropy models directly impacts compression efficiency and reconstruction quality.\", \"4. Ambiguity in baseline selection. In Table 1 and Line 354, using \\u201cOurs\\u201d as the baseline results in a row of zeros, which may lead to ambiguity and does not align with traditional statistical practices (which typically use a control group as the baseline). It is advisable to clarify the baseline in the caption or table notes. Additionally, selecting a well-recognized baseline (e.g., JPEG, BPG, or a state-of-the-art compression method) for BD-rate comparison would provide a more straightforward understanding of the relative performance of each method.\", \"5. Scoring issue with implementation versions. In Lines 442-443, the authors mention two implementation versions, yet both report a BD-rate of 0, which may cause confusion. It is recommended to provide a detailed explanation of the different implementations and clarify the reason for the BD-rate of 0 in each case.\", \"6. Suggestions for improving formula clarity:\", \"Clarity in the derivation from Eq.2 to Eq.4. The derivation from Eq.2 and Eq.3 to Eq.4 is crucial for the model\\u2019s structure but is not immediately clear. This derivation could directly impact the model's efficiency and accuracy. It is recommended to provide a more detailed explanation of these key steps in the main text to enhance understanding.\", \"Ambiguity in the Definition of Eq.11. In traditional diffusion models (e.g., DDPM and Stable Diffusion), the noise estimator typically predicts total noise rather than noise at specific frequency bands. Interpreting $ \\\\epsilon_{sd}(z_n, n) $ directly as a \\\"low-frequency component\\\" may lack theoretical support, especially without a clear basis for frequency division. 
The decomposition of predicted noise into low- and high-frequency components might be a heuristic approach, but further justification is needed to establish its rigor.\", \"Undefined $ l_p $ in Eq.9. The definition of $l_p $ in Eq.9 is unclear. To improve understanding, it would be helpful for the authors to clearly specify the meaning of $ l_p $ and provide relevant context.\", \"7. Minor formatting and typographical suggestions.\", \"Line 100: Add commas before and after \\\"i.e.\\\" for clarity.\", \"Lines 220, 229, and 238: Add commas at the end of formulas to improve readability.\"], \"questions\": \"1. Definition of \\\"Extremely Low Bitrates\\\". The standard for \\\"extremely low bitrates\\\" lacks a precise definition. Given varying content distributions (scenes) and the amount of high-frequency details, might \\\"extremely low\\\" have different thresholds? How would one define this threshold? Could the authors discuss the broader application potential of encoding methods in bandwidth-constrained scenarios? Additionally, does diffusion lose its value in compression at higher and medium bitrates?\\n\\n2. Codebook Details. The approach involving \\\"vector-quantized latent image representations\\\" is intriguing. Could the authors elaborate on the learning and training process of the codebook loss? Specifically, how is the codebook initialized, and what is the interaction between the codebook and $ l_p$?\\n\\n3. Since the multi-step sampling mechanism in diffusion leads to increased computational complexity in decoding, would placing diffusion in the encoding part or within the hyperprior yield different conclusions regarding complexity?\\n\\n4. Role of the diffusion mechanism. Is diffusion effective mainly as a post-processing module to enhance perceptual quality, or does it also contribute to compact representation? 
A deeper analysis of the role of diffusion in improving perceptual quality versus compact representation would be insightful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1h9J\", \"comment\": \"Thanks for your prompt response! We address your concerns accordingly.\\n\\n---\\n\\n_According to the third innovation you gave, DiffEIC can also control the details of content generation, please elaborate on the difference._\\n\\nDiffEIC effectively controls the details of content generation by adjusting the number of denoising steps. However, it requires a large number of denoising steps (e.g., 50 steps) to achieve reconstructions with more details, which significantly increases computational cost.\\n\\nIn contrast, the proposed controllable detail generation method allows for controlling the details of content generation without increasing the number of denoising steps, providing a more flexible and efficient approach. Specifically, at each denoising step, the predicted noise is decomposed into a low-frequency control component and a high-frequency control component, and we control the details of content generation by adjusting the intensity of the high-frequency control component.\\n\\n---\\n\\n_RDEIC almost duplicates GLC's encoder structure, which is another difference from DiffEIC. In order to fairly compare performance with DiffEIC, the performance gain from this change needs to be given._\\n\\nThank you for highlighting this concern. To address it, we have compared our baseline with DiffEIC. Note that the only difference between the two lies in the compression module. As shown in Fig. 6 and Table 2 (left) of the revised manuscript, the negligible differences in performance between DiffEIC and our baseline demonstrate that the choice of compression module has minimal impact on overall performance. 
This confirms that the improvements in our method are primarily attributed to the proposed Relay Residual Diffusion (RRD) and Fixed-Step Fine-Tuning (FSFT) strategy, as further validated by the ablation study presented in Fig. 6 and Table 2 (left).\\n\\n---\\n\\n_Why is there a loss on the perception when only RRD is used?_\\n\\nAs shown in Fig. 7 of the revised manuscript, reducing the number of denoising steps inevitably leads to a decline in perceptual performance. With RRD, we use only 2 or 5 denoising steps, which is significantly fewer than the 50 steps used by the baseline. Therefore, the perceptual performance loss is expected and considered acceptable given the substantial improvement in efficiency.\\n\\n---\\n\\n_It is recommended to add the performance of the RDEIC to the table so that the performance gains due to the inconsistency of the encoder can be seen._\\n\\nThanks for your valuable comment. \\\"+RRD+FSFT\\\" represents RDEIC in this table, which is the complete version of our proposed method. Are you referring to adding the performance of DiffEIC to the table for comparison? If so, we will revise the manuscript accordingly:\\n\\n>Table 2(left): The impact of RRD and FSFT on performance. Performance is represented by BD-rate (\\\\%), using DiffEIC as the anchor. Distortion metrics include PSNR, MS-SSIM, and SSIM. Perceptual metrics include DISTS, FID, KID, NIQE, and LPIPS. DS denotes the number of denoising steps. 2/5 denotes that we use 2 denoising steps for the two models with larger bpp and 5 steps for the remaining models.\\n| Methods | DS | Distortion | Perception | Average |\\n|----------------|-------|------------:|------------:|---------:|\\n| DiffEIC | 50 | 0 |0 | 0 |\\n| Baseline | 50 | 7.4 | -1.8 | 2.8 |\\n| +RRD | 2/5 | -31.0 | 12.7 | -9.1 |\\n| +RRD+FSFT | 2/5 | -42.2 | -36.6 | -39.4 |\\n\\nIt is evident that replacing the compression module with GLC does not result in performance improvements. 
Its primary contribution lies in slightly improving the encoding speed, as shown in Table 1 of the revised manuscript. This confirms that the enhancements in our method are primarily attributed to the proposed Relay Residual Diffusion (RRD) framework and Fixed-Step Fine-Tuning (FSFT) strategy.\"}", "{\"title\": \"Response to Reviewer Ejkn (Part I)\", \"comment\": \"Thank you for your time and constructive comments. We have revised the manuscript based on your comments and address the weaknesses and questions raised in your review below:\\n\\n---\\n\\n**Response to Weakness 1**: \\n\\nThank you for pointing out these relevant references. SeeSR [1] and PASD [2] embed the LR latent into the initial random noise at the terminal diffusion timestep N (1000) during inference, but still require numerous denoising steps for reconstruction (e.g., 50 in SeeSR and 20 in PASD). ResShift [3] constructs a Markov chain that transfers between degraded and target features by shifting the residual between them, substantially improving transition efficiency. However, ResShift's redesigned diffusion equation and noise schedule prevent it from leveraging the robust generative capability of pre-trained Stable Diffusion.\\n\\nIn contrast, our RDEIC directly derives a novel residual diffusion equation from Stable Diffusion\\u2019s original diffusion equation, enabling seamless integration of pre-trained Stable Diffusion to leverage its robust generative capability. To the best of our knowledge, this is the first successful integration of Stable Diffusion into a residual diffusion framework.\\n\\nWe have incorporated this discussion into the **Related Work** section to clarify the novelty of our approach.\\n\\n---\\n\\n**Response to Weakness 2**:\\n\\nThanks for your insightful comment. Residual diffusion allows us to construct the starting point using a smaller timestep $N$ (300) instead of the terminal diffusion timestep $T$ (1000). As shown in Fig.
2(b), the resulting $\\\\boldsymbol{z}_N$ retains most of the information from the compressed features $\\\\boldsymbol{z}_c$, providing a strong foundation for detail generation. Additionally, starting from $N = 300$ naturally avoids the randomness and error accumulation associated with sampling from $n = 1000$ to $n = 300$.\\n\\nFor Stable Diffusion, we leverage its robust generative capability to achieve high perceptual reconstruction at extremely low bitrates. It is important to clarify that Stable Diffusion is not used to resolve the randomness caused by pure noise or improve fidelity but rather to enhance the perceptual quality of the reconstruction.\\n\\nWe have rephrased this content in the third paragraph of Sec. 1 to improve clarity and presentation.\\n\\n---\\n\\n**Response to Weakness 3**:\\n\\n1. For Stable Diffusion, we also start from $\\\\boldsymbol{z}_N$ rather than pure noise. Since $\\\\boldsymbol{z}_N$ retains most of the information from the compressed feature and no additional control conditions (e.g., text) are applied, the direct output of Stable Diffusion is low-frequency images lacking high-frequency details.\\n\\n2. Our controllable detail generation method (Eq. (11)) aligns in form with classifier-free guidance (CFG). At each denoising step, the predicted noise is decomposed into two components, and the reconstruction is controlled by adjusting the guidance scale $\\\\lambda_s$. In this framework, Stable Diffusion corresponds to the case where $\\\\lambda_s = 0$.\\n\\n---\\n\\n**Reference**\\n\\n[1] Rongyuan Wu, Tao Yang, Lingchen Sun, Zhengqiang Zhang, Shuai Li, and Lei Zhang. Seesr: Towards semantics-aware real-world image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 25456\\u201325467, 2024.\\n\\n[2] Tao Yang, Rongyuan Wu, Peiran Ren, Xuansong Xie, and Lei Zhang. Pixel-aware stable diffusion for realistic image super-resolution and personalized stylization. 
arXiv preprint arXiv:2308.14469, 2023.\\n\\n[3] Zongsheng Yue, Jianyi Wang, and Chen Change Loy. Resshift: Efficient diffusion model for image super-resolution by residual shifting. In Thirty-seventh Conference on Neural Information Processing Systems, 2023\"}", "{\"title\": \"Response to Reviewer Ejkn (Part II)\", \"comment\": \"**Response to Question 1:**\\n\\nDiffusion-based extreme image compression methods are known for their exceptional performance in perceptual quality but often struggle to achieve high fidelity. As shown in Fig. 4 of the revised manuscript, diffusion-based approaches (solid lines) generally outperform other methods (dashed lines) in perceptual quality while exhibiting lower scores on fidelity metrics such as PSNR, MS-SSIM, and SSIM.\\n\\nWithin this context, the proposed RDEIC achieves notable fidelity improvements compared to existing diffusion-based methods, such as DiffEIC [1] and PerCO [2]. These improvements align with our stated motivation to address the fidelity limitations of diffusion-based extreme image compression methods. While the fidelity of our RDEIC may not yet surpass that of traditional or other learning-based methods, it remains a meaningful step forward in improving fidelity of diffusion-based approaches. \\n\\n---\\n\\n**Response to Question 2:**\\n\\nFirst, in image compression, it is common practice not to specify which model\\u2019s results are used when presenting comparative experiments, as each model corresponds to a different compression ratio (measured in bpp ). By default, results from all models are included to provide a comprehensive comparison. For instance, each point on the performance curves in Fig. 4 represents a model trained with a specific $\\\\lambda_r$ value. 
Additionally, in Table 2 and Table 3 of the revised manuscript, the BD-rate (%) is calculated based on the performance of all models.\\n\\nSecond, in all experiments, we set the guidance scale $\\\\lambda_s=1$ by default unless otherwise specified. This clarification has been included in the revised manuscript.\\n\\n---\\n\\n**Response to Question 3:**\\n\\nAs stated in the clarification, 2/5 indicates that we use 2 denoising steps for the two models corresponding to larger bpp values and 5 steps for the remaining three models. The performance points from these five models collectively form the performance curve, serving as the anchor for comparison. In this table, distortion metrics include PSNR, MS-SSIM, and SSIM, while perceptual metrics include DISTS, FID, KID, NIQE, and LPIPS. \\n\\nFollow your suggestion, we have added detailed notes to the table caption to clarify this information and included the performance curves in Fig. 6 of the revised manuscript to provide a clearer demonstration of the results.\\n\\n> Table 2: The impact of RRD and FSFT on performance (left) and speed (right). Performance is represented by BD-rate (\\\\%), using DiffEIC-50 as the anchor. Distortion metrics include PSNR, MS-SSIM, and SSIM. Perceptual metrics include DISTS, FID, KID, NIQE, and LPIPS. DS denotes the number of denoising steps. 2/5 denotes that we use 2 denoising steps for the two models with larger bpp and 5 steps for the remaining models. 
FSFT is a fine-tuning strategy that does not affect speed.\\n\\nPerformance (left):\\n| Methods | DS | Distortion | Perception | Average |\\n|----------------|:------:|------------:|------------:|---------:|\\n| Baseline | 50 | 7.4 | -1.8 | 2.8 |\\n| +RRD | 2/5 | -31.0 | 12.7 | -9.1 |\\n| +RRD+FSFT | 2/5 | -42.2 | -36.6 | -39.4 |\\n\\nSpeed (right):\\n| Methods | DS | Denoising Time | Speedup |\\n|----------------|:------:|:----------------------:|---------:|\\n| Baseline | 50 | 4.349 \\u00b1 0.013 | 1\\u00d7 |\\n| +RRD | 5 | 0.434 \\u00b1 0.002 | 10\\u00d7 |\\n| +RRD | 2 | 0.173 \\u00b1 0.001 | 25\\u00d7 |\\n\\n---\\n\\n**Response to Question 4:**\\n\\nThanks for pointing out this issue. For the two models corresponding to larger bpp values, we use 2 denoising steps, while for the remaining three models, we use 5 denoising steps. As shown in Fig. 4 of the revised manuscript, the performance points of these five models with different denoising steps collectively form the performance curve, serving as the anchor for calculating the BD-rate. To avoid unnecessary confusion, we have removed this column from Table 1 of the revised manuscript.\\n\\n---\\n\\n**Reference**\\n\\n[1] Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, and Jingwen Jiang. Towards extreme image compression with latent feature guidance and diffusion prior. IEEE Transactions on Circuits and Systems for Video Technology, 2024.\\n\\n[2] Marlene Careil, Matthew J. Muckley, Jakob Verbeek, and St\\u00e9phane Lathuili\\u00e8re. Towards image compression with perfect realism at ultra-low bitrates. In The Twelfth International Conference on Learning Representations, 2024.\"}
I still have the following concerns:\", \"regarding_response_to_weakness_1\": \"-- According to the third innovation you gave, DiffEIC can also control the details of content generation, please elaborate on the difference.\\n\\n-- RDEIC almost duplicates GLC's encoder structure, which is another difference from DiffEIC. In order to fairly compare performance with DiffEIC, the performance gain from this change needs to be given.\", \"regarding_response_to_weakness_2\": \"--Why is there a loss on the perception when only RRD is used?\\n\\n--It is recommended to add the performance of the RDEIC to the table so that the performance gains due to the inconsistency of the encoder can be seen.\"}", "{\"title\": \"Response to Reviewer 1h9J\", \"comment\": \"Thank you for your time and constructive comments. We have revised the manuscript based on your comments and address the weaknesses and questions raised in your review below:\\n\\n---\\n\\n**Response to Weakness 1**: \\n\\nAs stated in the clarification, our focus is on improving the diffusion process and training strategy rather than proposing a novel pipeline. In summary, our primary innovations are as follows:\\n\\n- **Relay Residual Diffusion**: We propose a relay residual diffusion that effectively combines the efficiency of residual diffusion with the powerful generation capability of Stable Diffusion. **To the best of our knowledge, we are the first to successfully integrate Stable Diffusion into a residual diffusion framework.
**\\n\\n- **Fixed-Step Fine-Tuning**: We design a fixed-step fine-tuning strategy that eliminates the discrepancy between training and inference, significantly enhancing reconstruction performance.\\n\\n- **Controllable Detail Generation**: We introduce a controllable detail generation method that enables a trade-off between smoothness and sharpness, allowing users to adjust the reconstruction results according to their preferences.\\n\\nGiven these contributions, we believe it is inappropriate to assess the innovation of this work solely based on the pipeline structure. Regarding comparisons with GLC, we regret that we could not include results due to the unavailability of GLC\\u2019s official code. However, our comparisons with PerCo, DiffEIC, and other methods are sufficient to demonstrate the superiority of our proposed RDEIC.\\n\\n---\\n**Response to Weakness 2**: \\n\\nThanks for your comment. RDD is not mentioned in this paper; did you mean RRD? \\n\\nOur RDEIC uses the same generative prior (Stable Diffusion) as DiffEIC. RRD is not a ``better generative model'' but rather the novel relay residual diffusion framework proposed in this paper, which combines the efficiency of residual diffusion with the robust generative capability of Stable Diffusion. Referring to Table 2(left) of the revised manuscript, applying the proposed RRD to DiffEIC would indeed improve its performance, as the difference between DiffEIC and our Baseline lies only in the compression module.\\n\\n>Table 2(left): The impact of RRD and FSFT on performance. Performance is represented by BD-rate (\\\\%), using DiffEIC as the anchor. Distortion metrics include PSNR, MS-SSIM, and SSIM. Perceptual metrics include DISTS, FID, KID, NIQE, and LPIPS. DS denotes the number of denoising steps. 
2/5 denotes that we use 2 denoising steps for the two models with larger bpp and 5 steps for the remaining models.\\n| Methods | DS | Distortion | Perception | Average |\\n|----------------|-------|------------:|------------:|---------:|\\n| Baseline | 50 | 7.4 | -1.8 | 2.8 |\\n| +RRD | 2/5 | -31.0 | 12.7 | -9.1 |\\n| +RRD+FSFT | 2/5 | -42.2 | -36.6 | -39.4 |\\n\\n---\\n**Response to Weakness 3**: \\n\\nThank you for your insightful comment. In the revised manuscript, we have selected more appropriate visualization results in Fig. 1 and Fig. 5 to better illustrate the advantages of RDEIC.\\n\\n---\\n**Response to Weakness 4**: \\n\\nIn the revised manuscript, we include the reconstruction performance of the SD autoencoder in Fig. 4 (indicated by the black horizontal line), which represents the theoretical upper limit of RDEIC\\u2019s performance. For bpp $>$ 0.06, the SD autoencoder performs worse than MS-ILLM in terms of NIQE, which explains why RDEIC also underperforms MS-ILLM on this metric.\\n\\nAdditionally, as NIQE is a no-reference metric, a lower NIQE score does not always indicate better reconstruction quality in the context of extreme image compression. For instance, Text+Sketch achieves the best NIQE score but produces reconstructions that significantly deviate from the original image.\"}
0T8vCKa7yu
LLM Compression with Convex Optimization—Part 1: Weight Quantization
[ "Sean I. Young" ]
In recent years, compression of large language models (LLMs) has emerged as an important problem to enable language model deployment on resource-constrained devices, reduce computational costs, and mitigate the environmental footprint of large-scale AI infrastructure. In this paper, we lay down the foundation for LLM quantization from a convex optimization perspective and propose a quantization technique that builds on this foundation for optimum quantization outcomes. Our quantization framework, CVXQ, scales to models containing hundreds of billions of weight parameters and provides users with the flexibility to compress models to any specified model size, post-training. A reference implementation of CVXQ can be obtained from.
[ "weight quantization", "model compression", "large language models" ]
Reject
https://openreview.net/pdf?id=0T8vCKa7yu
https://openreview.net/forum?id=0T8vCKa7yu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykAVRtZgl8", "uZP2tQnaoU", "swWdo4NeNI", "rqCK5aA5aZ", "pw3HGz4bXy", "ot3jOpWku0", "oIxbJhBXdc", "gHZFGdg1Gs", "f4WBwEdbP4", "c9dkRGcN48", "bOugxUcepR", "XvFYbNkRh9", "XvD7sAPfD2", "WD0PgzTyzg", "RuxpBnZGRC", "R2IQK9YGUO", "PYlS1HGT9s", "GvdyTRQUX4", "CNTRH7FqaW", "C9ckKk9k5i", "BlyQmunaZF", "AYVesni1Bi", "AQMGoC19Ry", "8gQQXQEEJk", "66ugfrgcAJ", "2BX5NNYhpc", "049fQPMfCJ" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732506821235, 1732508008452, 1730637100090, 1732463491460, 1731527698735, 1732505316738, 1730614807180, 1732496518543, 1732499248406, 1732279610031, 1734709756751, 1731527615021, 1732490749500, 1731527656870, 1732477670371, 1732735337029, 1737523424921, 1730679668608, 1732705695990, 1732529631143, 1732504561316, 1732501247861, 1732498928745, 1729428954670, 1732343065192, 1732499878514, 1732529094946 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_Pd64" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_Pd64" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_EzZu" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_z2t1" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_Pd64" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_7R2x" ], [ 
"ICLR.cc/2025/Conference/Submission954/Area_Chair_J355" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_EzZu" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_z2t1" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_EzZu" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_EzZu" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_7R2x" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Authors" ], [ "ICLR.cc/2025/Conference/Submission954/Reviewer_EzZu" ] ], "structured_content_str": [ "{\"comment\": \"*\\\"Could you please provide the Common Sense Reasoning (CSR) or MMLU results for the comparison between the proposed method and other state-of-the-art baselines such as QuIP?\\\"*\\n\\nQuIP CSR results are included in Table 5. Note that the Llama-x code by QuIP authors is broken (see e.g. 
https://github.com/Cornell-RelaxML/QuIP/issues/15), and we can only include the Llama-2 70B numbers for WikiText2, C4, Arc E, PIQA reported by QuIP authors.\"}", "{\"comment\": \"*\\\"Compressing LLMs: The Truth is Rarely Pure and Never Simple, ICLR 2024, https://openreview.net/forum?id=B9klVS7Ddk*\\\"\\n\\nThank you, we now reference this paper in the discussion section (ln 529) where we state that both perplexity and downstream accuracy measures (Common Sense QA in our case) are equally important for measuring accuracy.\"}", "{\"summary\": \"The authors introduce a quantization framework called CVXQ, which first optimizes bit depth assignment and then refines step sizes and biases using convex optimization techniques. To further improve the quantization scheme, the framework incorporates matrix partitioning, dividing the matrix into a set of row or column sub-matrices, each with its own bit depth and step size. The experiments are conducted on Meta's LLaMA and OPT models, using PPL and GSM8K as evaluation metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The authors derive several mathematical formulations for the quantization scheme, making a few assumptions about weight distributions, such as Normal or Laplace. They use figures to illustrate whether the statistical data from the OPT models align with these assumed distributions.\", \"weaknesses\": \"The main concern with this manuscript is that it does not address practical hardware constraints. Specifically, the authors permit each weight to have a different bit depth assignment, a strategy that is rarely seen in existing literature. For instance, AWQ employs dedicated kernels and uniformly quantizes all weights to 4 bits, aligning with the availability of a 4-bit engine. 
However, the manuscript lacks discussion on hardware acceleration or performance degradation resulting from the proposed quantization scheme.\\n\\nBy neglecting hardware-related considerations, the comparisons with previous works may appear unfair. Well-established quantization methods like OWQ, AWQ, or RTN explicitly demonstrate how their quantized models achieve latency improvements on common GPUs. In contrast, this manuscript explores more complex ideas, such as pruning and matrix partitioning, without addressing the impact on parallelism or the hardware requirements these approaches would entail.\\n\\nIt is crucial to describe the limitations of the quantization scheme for practical hardware implementation. Without doing so, methods that account for hardware acceleration might seem inadequate, despite the practical challenges associated with mixed precision or varying bit depth assignments.\\n\\nFor example, the authors should clarify how different bit-depth assignments would affect matrix multiplication kernels as batch size increases, as this could have a significant impact on performance.\\n\\nIn summary, the major concerns are: 1) the lack of considerations for hardware acceleration; 2) the use of configurations, such as varying bit depths, that seem impractical and create unfair comparisons with prior work; and 3) the need for a reevaluation of experimental results, given that the proposed quantization schemes operate under fundamentally different assumptions.\", \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Unfortunately, it appears that the authors have not implemented dequantization kernels in practice. In the Nvidia ecosystem, dequantization is typically executed on CUDA cores, which offer a lot lower throughput compared to Tensor cores. 
As a result, significant efforts are being made by practitioners to reduce dequantization overhead on CUDA cores through innovative techniques. For instance, recent FP6 kernels illustrate how engineers are working to improve dequantization throughput, even for large batch sizes.\\n\\nThe authors are strongly encouraged to study the practical advancements in recent dequantization kernels.\\n\\nWhile I maintain my original score, I am strongly inclined toward rejection.\"}", "{\"comment\": \"Apologies if my tone seems direct; it's meant purely for clarity.\\n__________\\n*1. \\\"Mixed-precision quantization is a well-researched field ... highlight how its method differs from existing techniques ...\\\"*\\n\\nWhile mixed-precision quantization has previously been explored (Wang et al., 2019; Chen et al., 2021; Lee et al., 2024; Dettmers et al., 2023), these methods assign different bit depths from a limited set of bit-depth options (e.g., 4 or 16 bits) or only across different layers. This is due to the combinatorial nature of mixed bit-depth assignment. This limits the attainable quantized model accuracy especially for LLMs with hundreds of billions of parameters.\\n\\nIn contrast, we formulate bit-depth assignment as a convex optimization problem. This allows us to overcome the combinatorial challenges faced by prior methods and to achieve true mixed-precision quantization at an arbitrary level of granularity (per-channel or per-layer) with a wider range of bit depth options ({0, 1, 2, 3, 4, 5, 6, 7, 8}). This leads to optimal model quantization tailored specifically to the demands of each channel or layer. \\n\\n(These paragraphs are now in the revised manuscript.)\\n__________\\n*\\\"... why it chose to compare solely with LLM quantization methods ...*\\\"\\n\\nOur paper is on LLM quantization. As such, we benchmark against state-of-the-art LLM quantization techniques, including mixed-precision and fixed-precision methods.
These methods are also compared against in other LLM quantization works, ensuring a robust and contextually relevant evaluation of our method.\\n__________\\n*\\\"2. ... Group quantization is not a new concept ...*\\\"\\n\\nWe meant to say that per-channel mixed precision works well with the grouping mechanism; see Table 2(c). The revised manuscript now correctly attributes the grouping mechanism to GPTQ and AWQ (Frantar et al., 2022; Lin et al., 2024).\\n__________\\n*\\\"3.. ... The writing needs improvement. The definition of \\\"part-1\\\" in the title is unclear ...\\\"*\\n\\nThank you, we will revise/shorten the title per your suggestion.\\n__________\\n*\\\"4. ... The convex optimization formulation proposed seems flawed. For instance, in equation three, f(X) is not convex ...\\\"*\\n\\n__Not true.__ We never say or imply the network model $f$ is convex. It is the optimization objective $d$ that is convex with respect to continuous variables $B_1,\\\\dots,B_N$. Objective $d$ is convex by construction since $f$ is linear(-ized) as in Hassibi and Stork (1992) and the MSE loss is convex. See eq. (5) and Appendix A for details. \\n__________\\n*\\\"5. ... The utility of mixed precision within a matrix is unclear ... Most mixed-precision quantizations occur between layers, not within a matrix.*\\\"\\n\\nOur work, as well as GPTQ, AWQ and OWQ (Frantar et al., 2022; Lin et al., 2024; Lee et al., 2024) are examples of weight-only quantization methods. None of these methods / their kernels perform arithmetic in 3 or 4 bits. At inference time, weights are de-quantized back into float16 so that they can multiply with float16 activations. This still amounts to acceleration because quantized weights can essentially travel faster through the memory hierarchy (registers\\u2013L1 cache\\u2013L2 cache\\u2013global). 
If weights must be dequantized, there is no need to insist upon mixed-precision quantization only across layers; see (Dettmers et al., 2022; Lee et al., 2024) for other examples of channel-wise mixed precision quantization. Our response to Pd64 clarifies this further.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"> Is your finding about MMLU vs perplexity published anywhere? Is it citable? None of our baseline methods report on MMLU. It is not good scholarly practice to make an anecdotal statement especially during a review.\\n\\nI apologize for omitting a reference paper in my previous comment. I understand that this may have caused some confusion. Therefore, I would like to introduce the relevant paper to support my argument.\\n\\n* Compressing LLMs: The Truth is Rarely Pure and Never Simple, ICLR 2024, https://openreview.net/forum?id=B9klVS7Ddk\"}", "{\"summary\": \"This paper presents a framework for efficient handling of large language models (LLMs) by (1) determining mixed-precision quantization at layer or group levels to meet a target bitwidth and (2) proposing a novel method for deciding quantization step sizes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed techniques are well-grounded in theory, and each aspect of the framework appears logically sound and justifiable.\", \"weaknesses\": \"The paper introduces a mixed-precision approach, but comparisons are primarily made with uniform-precision quantization methods. A broader survey and comparison with other mixed-precision methods, addressing their strengths and weaknesses, would provide a stronger context for evaluating the proposed method.\\n\\nAn ablation study is needed. According to Z-Fold [1], step size determination methods like Min-Max, MMSE, and Hessian-based approaches are often used in quantization.
A comparative analysis showing the effectiveness of the proposed method against these would strengthen the evaluation.\\n\\nSeparating the processes of bit-precision allocation and the quantization algorithm applied could provide clearer insights into each aspect of the method.\\n\\nThe proposed methodology is reasonable but lacks comparative analysis, which would underscore its relative advantages.\\n\\nTesting on a wider range of models and benchmarks would further validate the generalizability of the proposed approach.\\n\\n[1] Jeon et al. \\\"A frustratingly easy post-training quantization scheme for LLMs.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\", \"questions\": \"The paper claims that the proposed algorithm completes the quantization quickly, yet no experimental or theoretical analysis supports this assertion. Could the authors provide more evidence or discussion on this aspect?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal summary\", \"comment\": \"The reviewers raised two main concerns: 1) our proposed quantization method would not accelerate matrix-vector multiply on GPUs (Pd64), and 2) the accuracy of our quantized models should also be compared with e.g. QuIP on question-answering tasks (EzZu) along with extra ablation (z2t1). We believe we addressed both of these in a clear and reasonable manner.\\n\\nFor 1), we showed that our custom CUDA kernel leads to a 3.3\\u20133.8x acceleration for the weight matrices of e.g. the OPT-175B model quantized to 3 bits per weight on average, relative to using FP16 with cuBLAS. This speed-up was measured on an Nvidia A6000 (ln 535). Our non-model-specific CUDA kernel code (~60 lines) and a short commentary are provided in Appendix A.
At the last minute, Pd64 also requested a justification for our code design as well as expected performance across different hardware \\\"options\\\". We believe that this second request falls outside the scope of the current paper, which proposes a mathematical framework for model weight quantization.\\n\\nFor 2), we compared all quantization methods (proposed + 7 baselines) on three Llama-2 models against five more downstream tasks (Arc-E, Arc-C, HellaSwag, PIQA and Winogrande), showing that our method produces the highest quantized model accuracy across the majority of these tasks (table 5). An additional ablation study is included in table 2 (d). Some QuIP results are unavailable due to issues with the official QuIP code. EzZu is understandably not satisfied with this partial result, but availability of other people's code is beyond the authors' control.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"There is a significant lack of detail regarding the kernel design. What are the specific hardware choices considered for the kernel design? Is it A100, A6000, H100, or something else? Additionally, what is the expected performance across other hardware options? What are the limitations of the proposed kernels, and how does batch size impact performance?\\n\\nI recommend referring to other documents to see how kernel performance is typically demonstrated.\\n\\nI also strongly align with the feedback provided by the other reviewers.\"}", "{\"title\": \"Thanks for your explanation\", \"comment\": \"Dear author,\\n \\nThanks for your detailed explanation. I still have a question about your paper: What is the biggest gain through modeling the quantization problem as a convex problem instead of achieving some accuracy improvement (which seems less important since nowadays lots of methods can finetune the accuracy on a certain dataset)? I think making this problem clearer would be helpful for this research.
\\n\\nThanks!\"}", "{\"metareview\": \"This paper introduces a convex optimization-based framework (CVXQ) for mixed-precision quantization in large language models, promising more efficient compression and competitive performance. While the theoretical approach seems solid and results on perplexity and some downstream tasks appear encouraging, all reviewers raised similar concerns.\\n\\nKey issues include a lack of thorough hardware implementation details and uncertainty about real-world practicality. Comparisons against other state-of-the-art mixed-precision methods are insufficient, leaving it unclear if CVXQ truly outperforms strong existing baselines. Also, the evaluation focuses too narrowly on perplexity and would benefit from more diverse benchmarks.\\n\\nDespite the authors\\u2019 efforts in their rebuttal, these concerns remain partially unaddressed. The method\\u2019s potential is acknowledged, but without more robust comparisons and concrete hardware insights, it is hard to judge its true impact. Unfortunately, the reviewers collectively feel the paper is not ready for acceptance at this time.\", \"additional_comments_on_reviewer_discussion\": \"While authors addressed hardware implementation concerns and added ablation studies, reviewers remained skeptical about three core issues: incomplete comparisons with state-of-the-art methods (particularly QuIP), insufficient hardware implementation details across different architectures, and limited evaluation metric coverage. 
The discussion suggests authors made earnest efforts to address concerns through new experiments and clarifications, but couldn't fully resolve the fundamental concerns about practical implementation and comprehensive benchmarking.\"}", "{\"comment\": \"Apologies if my tone seems direct; it's meant purely for clarity.\\n_________________________\\n*\\\"Lacks comparison with existing LLM quantization methods such as FlexRound [1] and QuIP [2].\\\"*\\n\\nFlexRound does not have publicly available code and its results are not reproducible. The revised manuscript lists QuIP results in Tables 1 and 3 (ln 398, 394, 441, 447). Note that the official QuIP code does not work correctly on Llama-2 models (a known issue reported on the QuIP GitHub), producing perplexities higher than RTN. Out of respect for the authors of QuIP, we do not report these QuIP results on Llama-2 models.\\n_________________________\\n*\\\"Primarily evaluates LLM performance using perplexity, with insufficient comparison across other metrics like MMLU and AlpacaEval.\\\"*\\n\\nWe have some concerns with MMLU and AlpacaEval, as they are not very widely used, not even in the FlexRound and QuIP works that you refer to. So, we include one more perplexity metric (on C4) as well as the following new QA metrics: Arc (Challenge), Arc (Easy), HellaSwag, PIQA, and Winogrande. These are popularly used in other model compression papers. The revised manuscript shows these results in Table 5.\\n__________________________\\n*\\\"Insufficient discussion ... accelerated on existing hardware such as GPUs.\\\"*\\n\\nOur custom CUDA kernel achieves 3.3\\u20133.8x acceleration for matrix-vector multiplication. The revised manuscript discusses this (ln 41, 532) and lists the kernel code in Appendix A.\\n__________________________\\n*\\\"Tables 1 and 2 lack information on the average bit depth achieved by CVXQ ... 
may not exactly match the user-specific quantization bit depth ...\\\"*\\n\\nOur convergence tolerance is $10^{-6}$ bits (ln 233). The actual average bit depths achieved by CVXQ were 3.999999\\u20134.000001 (for 4-bit models) and 2.999999\\u20133.000001 (for 3 bit models).\\n__________________________\\n*\\\"What do the terms \\\"row\\\" and \\\"column\\\" mean in the context of row and column partitioning in Figure 3?\\\"*\\n\\nRow (resp. column) refers to the average bit depth savings achieved when assigning separate bit depths to rows (resp. columns) of each weight matrix. This is also stated more clearly in the revised manuscript.\\n__________________________\\n*\\\"What units were used for clustering in Tables 1 and 2?\\\"*\\n\\nLn 356 states that cluster sizes of 512 (OPT) and 256 (Llama-2) are used.\\n__________________________\\n*\\\"The Massive Activation paper[3] demonstrated significant performance degradation when clipping massive activations from activation distributions ... Can the proposed CVXQ method be extended to apply to activation distribution?*\\\"\\n\\nCVXQ already considers activation distribution. CVXQ uses the mean square magnitude of the weight gradient to inform bit depth assignment. By expressing weight gradient as the outer product of the input to the weights and the gradient of weight's output, we see that input magnitudes are indeed considered in the form of (mean square) gradient magnitude.\\n___________________________\\n*\\\"The quantization process described in the paper suggests that the time required for quantization might exponentially increase with the number of iterations, as shown in Figure 5.*\\\"\\n\\n__Not true.__ The number of iterations is kept the same across model sizes and the time each iteration takes is roughly linear (slightly super-linear) in the number of model parameters. CVXQ takes 47m to quantize the Llama-2-7B model, and 12h for the Llama-2-70B one. 
The revised manuscript now states these timings.\\n___________________________\\n*\\\"How does the size of the calibration set affect the performance of CVXQ?*\\\"\\n\\nFor OPT-1.3B and OPT-13B models, we experimented using 1024 calibration samples instead of 128 and the resulting perplexities on C4 were within \\u00b10.01 of those based on 128 samples. This is consistent with the variance observed from choosing a different set of 128 calibration examples (ln 530).\"}", "{\"comment\": \"Apologies in advance if my tone seems direct; it's meant purely for clarity.\\n________________\\n*\\\"An ablation study is needed.\\\"*\\n\\nRevised manuscript now includes an ablation in Table 2 (d). We start with RTN (min-max), to which we add MSE, add mixed precision, and finally add companding to arrive at the proposed approach. At 4 bits, mixed precision bit depth assignment is the dominant component. At 3 bits, both mixed precision and companding play important roles. The work of [1] Jeon et al. is now cited on ln 425. \\n________________\\n*\\\"Separating the processes of bit-precision allocation and the quantization ...\\\"*\\n\\nIndeed, Algorithm 1 should be called \\\"bit depth determination\\\". Once the bit depths are obtained (along with the scale factors), the actual quantization is simply scaling the weights using the scale factor, and uniformly quantizing over [0, 1].\\n________________\\n*\\\"The proposed methodology is reasonable but lacks comparative analysis ... Testing on a wider range of models and benchmarks.\\\"*\", \"the_revised_manuscript_includes_more_benchmarks\": \"C4 (perplexity), Arc Challenge, Arc Easy, HellaSwag, PIQA and Winogrande. We already test on 11 models of various sizes (125M \\u2013 70B) that our baseline methods also test on. 
Many baseline methods (GPTQ, QuIP, etc) do not have official code available for non-OPT / non-Llama models.\\n________________\\n*\\\"The paper claims that the proposed algorithm completes the quantization quickly ... discussion on this aspect.\\\"*\\n\\nOnce the optimal bit depths have been determined, we simply perform round-to-nearest quantization of weights scaled to the range [0, 1]. The revised manuscript now clarifies this.\"}", "{\"comment\": \"Apologies in advance if my tone seems direct; it's meant purely for clarity.\\n_________________\\n*1. \\\"The main concern ... the authors permit each weight to have a different bit depth assignment, ...\\\"*\\n\\n__Not true.__ We use a grouping/clustering idea similar to GPTQ and AWQ (Frantar et al., 2022; Lin et al., 2024). Ln 356 says that we assign a single bit depth to a group of 512 weights (OPT) or 256 weights (Llama-2). Table 2(c) also shows that a group size of 512 performs better than 64, 128, or 256 for our quantized OPT models. For added clarity, we add to the caption of Figure 4: ... Clustering with a cluster size of 2 illustrated only for clarity.\\n_________________\\n*2. \\\"... AWQ employs dedicated kernels and uniformly quantizes all weights to 4 bits, aligning with the availability of a 4-bit engine. However, the manuscript lacks discussion on hardware acceleration or performance degradation.\\\"*\\n\\n__Not true.__ GPTQ and AWQ (Frantar et al., 2022; Lin et al., 2024) and their engines do not perform arithmetic in 4 or 3 bits. These methods dequantize 4- or 3-bit weights back into float16 on the fly so that weight-activation multiplication can be performed in float16. We dequantize mixed precision weights (some of which are 3 bits, some 4 bits, some 8 bits, etc.) back into float16 in exactly the same way. \\n__________________\\n*3. \\\"... 
clarify how different bit-depth assignments would affect matrix multiplication kernels as batch size increases, as this could have a significant impact on performance.\\\"*\\n\\nIncreasing the batch size does not change inference, as weights are always dequantized back to float16 as a first step and activations are always kept in float16. Indeed, this is the approach also used by (Frantar et al., 2022; Lin et al., 2024; Lee et al., 2024).\\n__________________\\n*4. \\\"the lack of considerations for hardware acceleration\\\"*\\n\\n__Not true.__ Our method leads to hardware acceleration in exactly the same manner as GPTQ, AWQ, and OWQ. For example, our kernel (see the supplementary zip file) provides a 3.8x speed up for matrix vector multiplication on the 12288 x 49152 weight matrix (dense layer) of OPT-175B at 3 bits per weight on average, relative to FP16 matmul using cuBLAS (measured on A6000).\\n__________________\\n*5. \\\"Use of configurations, such as varying bit depths, that seem impractical and create unfair comparisons with prior work ... the need for a reevaluation of experimental results, given that the proposed quantization schemes operate under fundamentally different assumptions\\\"*\\n\\n__Not true.__ The methods you mention (GPTQ, AWQ and OWQ) do __not__ perform arithmetic directly in 3 or 4 bit weights and activations. They assume weights will be de-quantized back to float16 as needed for weight\\u2013activation multiplications in float16. This is the same assumption that our method is based on. 
Our method has the additional flexibility of assigning different bit depths (8, 7, 6, 5, 4, 3, 2, 1, or even 0 bits) to different groups of 512 weights to maximize the accuracy of the quantized model while maintaining the same de-quantization complexity (since everything is scalar quantized).\"}", "{\"comment\": \"*\\\"Unfortunately, it appears that the authors have not implemented dequantization kernels in practice.\\\"*\\n\\n__Not true.__ Our dequantization kernel is in the new supplementary zip file. For e.g. the 12288 x 49152 weight matrix (dense layer) of OPT-175B at 3 bits per weight on average, our kernel provides a 3.8x speed up over FP16 matrix-vector multiplication using cuBLAS matmul (on A6000). This is also stated in the revised manuscript.\"}", "{\"comment\": \"Thank you. Slim-LLM (https://openreview.net/forum?id=tjlTczcnPz) is a notable mixed precision method (which extends GPTQ). A potential drawback of implementing mixed precision (or any extension) on top of GPTQ is that the resulting method inherits the need to perform Cholesky decomposition during quantization, precluding the method from being applied to activation quantization (as Cholesky decomposition would need to be applied to activations at inference time). Unlike Slim-LLM, our method is a mixed-precision RTN scheme, side-stepping GPTQ's Cholesky decomposition altogether. We now make this distinction from Slim-LLM more clear in the revised manuscript. Since Slim-LLM is contemporaneous work, we were not required to compare our work against it (see https://iclr.cc/Conferences/2025/FAQ).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a method called CVXQ for mixed precision weight-only quantization of large language models (LLMs) using convex optimization techniques. CVXQ allows for user-specific quantization bit depths by defining the average bit depth and then seeking to minimize quantization error within this constraint. 
The method introduces row-wise and column-wise clustering to achieve this goal, where each cluster can be assigned different bit depths. To assign these bit depths, the problem is formulated in a Lagrangian form and solved using convex optimization. The effectiveness of CVXQ is demonstrated by achieving superior performance on the WikiText perplexity (PPL) metric compared to methods such as GPTQ, AWQ, and OWQ across various sizes of OPT models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Demonstrates that companded quantization can reduce the mean square error of weights before and after quantization more effectively than uniform quantization.\", \"Introduces a novel approach to weight-only quantization by employing various partitioning methods, specifically row and column clustering.\", \"Proposes a method to minimize the degradation in performance due to quantization within a constrained average bit depth by finding the optimal bit assignment combination. This is achieved by defining the quantization objective function in a Lagrangian form and solving it using convex optimization.\", \"Shows that the proposed partitioning methods can result in greater bit depth savings compared to non-partitioned methods.\"], \"weaknesses\": [\"Lacks comparison with existing LLM quantization methods such as FlexRound[1] and QuIP[2].\", \"Primarily evaluates LLM performance using perplexity, with insufficient comparison across other metrics like MMLU and AlpacaEval.\", \"Insufficient discussion and comparative analysis on how the proposed CVXQ method can be accelerated on existing hardware such as GPUs. One of the key goals of compression methods like quantization is to achieve actual acceleration. 
Although the paper mentions that this will be addressed in Part 2, it is crucial to include a discussion on how to accelerate the proposed quantization format.\", \"Tables 1 and 2 lack information on the average bit depth achieved by CVXQ. Since the proposed method assigns bit depths through a convex optimization process, it may not exactly match the user-specific quantization bit depth, leading to potentially different compression rates in practice.\", \"The quantization process described in the paper suggests that the time required for quantization might exponentially increase with the number of iterations, as shown in Figure 5.\", \"[1] FlexRound: Learnable Rounding by Element-wise Division for Post-Training Quantization, https://openreview.net/forum?id=-tYCaP0phY_\", \"[2] QuIP: 2-Bit Quantization of Large Language Models With Guarantees, https://arxiv.org/abs/2307.13304\"], \"questions\": [\"What do the terms \\\"row\\\" and \\\"column\\\" mean in the context of row and column partitioning in Figure 3?\", \"What units were used for clustering in Tables 1 and 2?\", \"The Massive Activation paper[3] demonstrated significant performance degradation when clipping massive activations from activation distributions. Papers like LLM.int8, SmoothQuant, and AWQ have shown the importance of considering activation distributions to mitigate the impact of outliers. Can the proposed CVXQ method be extended to apply to activation distribution?\", \"How does the size of the calibration set affect the performance of CVXQ?\", \"[3] Massive Activations in Large Language Models, https://arxiv.org/abs/2402.17762\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your effort to address my concerns; however, my main concern as below still remains unresolved. 
It seems essential to address this issue in order for the work to be considered for publication.\\n\\n\\\"The paper introduces a mixed-precision approach, but comparisons are primarily made with uniform-precision quantization methods. A broader survey and comparison with other mixed-precision methods, addressing their strengths and weaknesses, would provide a stronger context for evaluating the proposed method.\\\"\"}", "{\"comment\": \"*\\\"... main concern about the comparison with state-of-the-art quantization methods remains unaddressed. \\\"*\", \"you_proposed_two_methods_for_us_to_compare_against\": \"FlexRound [1] and QuIP [2]. FlexRound's GitHub URL in the paper (https://github.com/onliwad101/FlexRound_LRQ) is non-functional. As an alternative, we compared with QuIP as instructed, though only partial results are available (public QuIP code is broken).\\n\\nNote that AWQ and OWQ are also considered state-of-the-art, and their results on QA are in the manuscript.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"> The revised manuscript includes QuIP results, as well as 6 more metrics (Scores on Arc Easy, Arc Challenge, HellaSwag, PIQA, Winogrande, C4 perplexity). This is in addition to WikiText2 and GSM8K. We believe this is sufficient validation from multiple angles, and is in line with the amount / kind of validation performed by others.\\n\\nThank you for your response. I have reviewed the comparison results for QuIP. Could you please provide the Common Sense Reasoning (CSR) or MMLU results for the comparison between the proposed method and other state-of-the-art baselines such as QuIP?\"}", "{\"comment\": \"*\\\"What are the specific hardware choices considered for the kernel design?\\\"*\\n\\nLn 635 (revised manuscript) says that we measured the 3.8x matmul speed-up on an A6000. Note that our submission focuses primarily on convex optimization for model quantization, not kernel design. 
Expected performance across other hardware options, while nice to discuss, is not essential to the mathematical framework being presented here.\\n_______________\\n*\\\"I recommend referring to other documents to see how kernel performance is typically demonstrated.*\\\"\\n\\nWhat \\\"documents\\\" are you referring to? References? For example, the baseline methods AWQ (AAAI), QuIP (NeurIPS) report kernel performance similarly on a single GPU model. Only AWQ (MLSys) explores more hardware options because MLSys emphasizes hardware aspects of ML. Each venue has different expectations on what a model quantization paper should entail. In this ICLR submission, we focus more on the mathematical / rate\\u2013distortion-theoretic aspects of model quantization.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for the authors' response. However, I still feel that my questions have not been fully addressed. Specifically, if the kernel structure shows similar acceleration at the same compression ratio in both GPTQ and AWQ, the advantage of the proposed paper should be demonstrated through a better trade-off point between compression ratio and accuracy. However, I do not understand how the proposed method has been reasonably compared with existing methods such as FlexRound or QuIP. Additionally, I am skeptical that a lower perplexity score, such as in Tables 1 and 2, necessarily indicates better quality in LLMs. Therefore, I think the authors need to conduct a more rigorous comparison in terms of accuracy. For these reasons, I have decided to maintain my original score.\"}", "{\"summary\": \"This paper tackles the critical issue of large language model (LLM) compression, proposing a novel quantization technique, CVXQ, viewed from a convex optimization perspective. 
CVXQ, scalable to models with hundreds of billions of weight parameters, allows users to compress models to any specified size after training\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a comprehensive quantization method for applying different bit allocation to groups within a large language model (LLM) matrix.\", \"weaknesses\": [\"The paper's contribution isn't distinct. Although it proposes treating dynamic bit allocation as a convex optimization problem, this approach faces several issues:\", \"Mixed-precision quantization is a well-researched field; the paper should highlight how its method differs from existing techniques and why it chose to compare solely with LLM quantization methods.\", \"Group quantization is not a new concept but a long-standing basic strategy in the quantization field.\", \"The convex optimization formulation proposed seems flawed. For instance, in equation three, f(X) is not convex, which questions the validity of the entire problem.\", \"The writing needs improvement. The definition of \\\"part-1\\\" in the title is unclear, and many descriptions in the text are ambiguous.\", \"The utility of mixed precision within a matrix is unclear. This approach would require complex, specific hardware design, limiting its broad application. Most mixed-precision quantizations occur between layers, not within a matrix.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"*\\\"What is the biggest gain through modeling the quantization problem as a convex problem\\\"*\\n\\nHigher quantized model accuracy is not the focus of this paper (even if it is one of our findings). 
Our work shows that using mixed precision bit depths (determined via optimization) combined with classic round-to-nearest quantization can outperform more complex GPTQ-like methods. Our low complexity quantization operations (scaling and rounding) can even facilitate activation quantization at inference time. (Ln 59\\u201362)\\n\\nQuantizing a matrix of activations (e.g., during batched inference) using GPTQ-like methods can take tens of seconds, introducing significant delays to inference. Essentially, these methods are procedural, requiring the same computational effort for subsequent quantizations of similar activation matrices or even the same weight matrix (if there is a need to quantize the same weight matrix again a second time). (Ln 45\\u201347). This contrasts with our method where only scaling and rounding are needed for the actual quantization once the optimal bit depths have been obtained.\\n\\nIn this work, we lay the theoretical groundwork for determining the optimal bit depths using optimization and demonstrate the effectiveness of this approach in the context of weight-only quantization, where the bulk of earlier work lies.
I have carefully reviewed the author's response, but my main concern about the comparison with state-of-the-art quantization methods remains unaddressed. Therefore, I will keep my score.\"}" ] }
0T49QbSOho
Regret-Optimal List Replicable Bandit Learning: Matching Upper and Lower Bounds
[ "Michael Chen", "A. Pavan", "N. V. Vinodchandran", "Ruosong Wang", "Lin Yang" ]
This paper investigates *list replicability* [Dixon et al., 2023] in the context of multi-armed (also linear) bandits (MAB). We define an algorithm $A$ for MAB to be $(\ell,\delta)$-list replicable if with probability at least $1-\delta$, $A$ has at most $\ell$ traces in independent executions even with different random bits, where a trace means the sequence of arms played during an execution. For $k$-armed bandits, although the total number of traces can be $\Omega(k^T)$ for a time horizon $T$, we present several surprising upper bounds that are either independent of or logarithmic in $T$: (1) a $(2^{k},\delta)$-list replicable algorithm with near-optimal regret, $\widetilde{O}({\sqrt{kT}})$, (2) a $(O(k/\delta),\delta)$-list replicable algorithm with regret $\widetilde{O}\left(\frac{k}{\delta}\sqrt{kT}\right)$, (3) a $((k+1)^{B-1}, \delta)$-list replicable algorithm with regret $\widetilde{O}(k^{\frac{3}{2}}T^{{\frac{1}{2}}+2^{-(B+1)}})$ for any integer $B>1$. On the other hand, for the sublinear regret regime, we establish a matching lower bound on the list complexity (parameter $\ell$). We prove that there is no $(k-1,\delta)$-list replicable algorithm with $o(T)$-regret. This is optimal in list complexity in the sub-linear regret regime as there is a $(k, 0)$-list replicable algorithm with $O(T^{2/3})$-regret. We further show that for linear bandits with $d$-dimensional features, there is a $\widetilde{O}(d^2T^{1/2+2^{-(B+1)}})$-regret algorithm with $((2d+1)^{B-1},\delta)$-list replicability, for $B>1$, even when the number of possible arms can be infinite.
[ "Replicability", "Regret Bound", "Bandit" ]
Accept (Poster)
https://openreview.net/pdf?id=0T49QbSOho
https://openreview.net/forum?id=0T49QbSOho
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z7lPJT9sED", "ytTMCEu5Tv", "vmozr5vSjt", "tzB4s1azOW", "rxYhY8GDrV", "pAVLSdz8gD", "mQtO0dbM06", "hh6Ad3gCLM", "edMa2CY3RP", "dpzdofgxE9", "W3lekSpGde", "MNGJXuCYOk", "HA5leaoK96", "9HvkqcEH1k", "84fiOLaadM", "82fZJ8eJi7", "6TGruPXbuL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732956513679, 1732660401045, 1732516764615, 1730278896825, 1730712692486, 1732382356855, 1737524061383, 1732305572947, 1730646781396, 1732648487153, 1732305903212, 1732507426747, 1730481698018, 1732306253098, 1732652988501, 1734942084021, 1732306440922 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_ksHy" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_m3DX" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_m3DX" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_ujbd" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_jzZs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_ksHy" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_ujbd" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_jzZs" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ], [ "ICLR.cc/2025/Conference/Submission10553/Reviewer_ujbd" ], [ "ICLR.cc/2025/Conference/Submission10553/Area_Chair_pkeZ" ], [ "ICLR.cc/2025/Conference/Submission10553/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response. 
I'll update my review.\"}", "{\"comment\": \"Thank you for your feedback and concerns. We respectfully disagree with your assessment regarding the regret bounds. The algorithms (1 and 2) proposed in our paper are nearly regret-optimal $\\\\tilde{O}(\\\\sqrt{T})$, as they share the same regret bound wrt $T$, which have been shown in the literature to be near-optimal. The change we made in the abstract clearly explains a contribution: sub-linear regret algorithms must have a list complexity of at least $k$. If the phrase \\\"regret-optimal\\\" in the title or elsewhere is misleading, we are open to suggestions for acceptable wording. We would like to point out that the statements of the theorems in the paper are precise and are not misleading.\"}", "{\"comment\": \"Thank you for your response. I will maintain my original rating.\"}", "{\"summary\": \"This paper introduces the concept of list replicability to the multi-armed bandit model, where the sequence of arm pulls must lie in a small finite list with high probability. The authors design and analyze three algorithms, each providing different levels of guarantees on list replicability and high-probability regret. Additionally, a nearly matching lower bound is proved for any algorithm with sub-linear regret. The paper also extends the study to linear bandits.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Although the paper is highly theoretical, it is well-presented and clearly conveys the key ideas behind the algorithm designs and proofs.\\n\\n2. Three algorithms with varying levels of guarantees are introduced, each with its own significance. Notably, the first algorithm achieves near-optimal cumulative regret, and the total number of possible traces is independent of T. The last algorithm is based on a subroutine from Dixon et al. (2023) and is nearly optimal, given the lower bound in Section 6.\\n\\n3. 
The theoretical contributions are nontrivial, and the analysis of the phase-elimination algorithm is novel, which should be of interest to the bandit community. It is also interesting that the lower bound is proven using the Sperner/KKM lemma, a combinatorial result in coloring.\", \"weaknesses\": \"The main criticism of the paper might lie in its motivation. In the introduction, it is suggested that list replicability might be beneficial for safety-critical applications, as one could be prepared for the action sequence being played. However, although the proposed algorithms can ensure a small number of traces with high probability, these possible traces cannot be known without exact knowledge of the problem instance. Therefore, outside of the theoretical domain, the practical application of list replicability seems limited.\", \"questions\": \"1. Could you compare $\\\\rho$-replicability and list replicability with respect to their potential practical applications, such as in clinical trials?\\n2. Why is $C$ referred to as the number of shifts? Do you mean the number of possible shift $r$?\\n3. Minor typos: Line 207: Theorem 2.1 -> Assumption 2.1; Line 210: lemma -> lemmas; Line 346: the of -> the number of.\\n4. Thompson sampling and UCB are two well-established algorithms in the bandit literature. Thompson sampling is randomized, making it tricky to provide strong list replicability guarantees. Could you discuss the potential challenges in adapting UCB? My intuition is that UCB might achieve good list replicability with appropriate early-stage modifications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies list replicability in multi-armed bandits (MAB), defining an algorithm as list replicable if it limits the distinct arm sequences (traces) across independent executions with high probability. 
Further, this paper proposes three algorithms with different parameters of list replicability. Finally, this paper investigates a lower bound of bandits with list replicability.\\n\\n**---After rebuttal---**\\n\\nMy primary concern pertains to the main claims of the paper, as highlighted in the title: \\\"Regret-Optimal\\\" and \\\"Matching Upper and Lower Bounds.\\\" Following a detailed discussion with the authors, I have observed that the paper fails to provide any lower bound in terms of regret for their setting, even in Section 6. Consequently, the claims of \\\"Regret-Optimal\\\" and \\\"Matching Upper and Lower Bounds\\\" appear highly questionable.\\n\\nIn their latest response, the authors stated that their claim of \\\"regret-optimality\\\" is based on achieving a $\\\\tilde{O}(\\\\sqrt{T})$ regret. However, to the best of my knowledge in the field of bandits, it is not standard practice to assert optimality of regret solely with respect to the parameter $T$, while disregarding other critical parameters (e.g., $K$). Given this significant issue of overclaim, I am unable to recommend this paper for acceptance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem setting proposed is both novel and intriguing, characterized by a rigorously defined concept of bandit replicability in Definition 2.2.\\n2. The theoretical analysis provided is exhaustive, introducing three distinct algorithms tailored to various parameters of replicability.\", \"weaknesses\": \"1. Algorithms 1 and 2 exhibit considerable similarities. Could there be a method to consolidate these two algorithms into a unified framework?\\n\\n2. In Theorem 6.1, the designation \\\"lower bound\\\" appears misapplied as it does not seem to correspond to the lower bounds of any algorithms discussed previously. Notably, in Theorem 6.1 we have $l \\\\approx k$, whereas in prior algorithms $l \\\\gg k$ in most cases. 
In my humble opinion, a valid lower bound should be able to explain whether the proposed algorithms can be further optimized in general.\\nFurthermore, why the authors said \\\"We show that result (3) is nearly tight for B=2\\\" in the abstract. What's the hidden constant behind $\\\\Omega(B) $ in (3). Do you mean the regret of (3) is $O(T)$ for $B=2$?\\n\\n3. Would it be more accurate to describe what is currently referred to as \\\"lower bounds\\\" in Theorem 6.1 as \\\"impossibility results\\\"? I think Theorem 6.1 is quite trivial because any pair of traces should share more than two arms if the total number of traces is less than $K$.\\n\\n4. The absence of experimental validation in this paper is notable. Including even preliminary numerical simulations or toy experiments could significantly enhance the validity and impact of the proposed algorithms.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. I have increased my rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the review.\", \"addressing_weaknesses_and_questions\": \"1. The main difference between Algorithms 1 and 2 is that Algorithm 2 uses a random threshold to eliminate the arms and thus needs a better estimate compared to Algorithm 1. To do so, the batch length needs to be changed appropriately. As pointed out, Algorithms 1 and 2 can be unified, however, we believe that presenting them separately helps with the clarity of the analysis.\\n\\n2. Algorithm 1 gives a list size of $2^k$, Algorithm 2 gives a list size of $O(k/\\\\delta)$, and Algorithm 3 gives a list size of $k$ (with $B = 2$). A natural question is whether the list sizes can be further reduced. 
Our lower bound result (Theorem 6.1) states that the list size cannot be less than $k$ if the regret is to be sub-linear. We can restate the theorem as follows: \"Any algorithm with $o(T)$ regret must have a list complexity $\\geq k$\". On the other hand, we know there is a $o(T)$-regret algorithm with list complexity equal to $k$ (the third algorithm with $B = 2$). Regarding the comment about $\\Omega(B)$: this can be changed to $2^{-(B+1)}$ so that there are no hidden constants. This has been updated in the latest version. When the regret is less than $o(T^{2/3})$, our algorithms have a list complexity asymptotically larger than $k$. It is an open question whether we can design algorithms with smaller list complexity. This is stated as an open question in the conclusions.\\n\\n3. We do not see how \"sharing an arm across two different traces\" yields a lower bound on the list complexity. In fact, any reasonable algorithm must play all the arms with a very high probability; otherwise, the missing arm could have the highest mean and suffer a regret of $O(T)$. So in any algorithm with sublinear regret, all arms are shared across (almost) all traces. We do not see a trivial way of arguing the lower bound. The fact that we are looking for algorithms with sublinear regret is crucial, as it is trivial to design algorithms with linear regret with just one trace. \\n\\n4. Thank you for the suggestion. This work focuses on the theoretical foundations; the main contribution is to define and introduce the notion of list replicability in MAB and explore possibilities and impossibilities.\"}", "{\"summary\": \"This paper studies list replicability in multi-armed bandits and linear bandits. It comes up with the notion of $(\\ell, \\delta)$-list replicability, and proves various trade-offs between replicability and regret dependency on the number of arms and on the time horizon. 
Furthermore, the paper extends the results to the linear bandits setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a definition of reproducibility in bandit problems.\\n2. The paper proves a tight trade-off between replicability and regret dependency on $T$. \\n3. The proof of the lower bound is quite insightful.\", \"weaknesses\": \"1. The algorithms are generally based on successive elimination, so they offer less insight into more widely used bandit algorithms like UCB.\\n2. The proofs of the upper bounds are quite simple and lack enough novelty given their similarity to successive elimination.\", \"questions\": \"1. Line 18, $\\\\widetilde O \\\\sqrt{kT}$ missing parentheses.\\n2. The notion of $O(\\\\cdot)$ and $\\\\Omega(\\\\cdot)$ was a little abused. The paper contains regret bounds like $\\\\widetilde O (k^{\\\\frac32} T^{\\\\frac12 + 2^{-\\\\Omega(B)}})$. Here, it's inappropriate to use $\\\\Omega(\\\\cdot)$ in $\\\\widetilde O(T^{2^{-\\\\Omega(B)}})$, because the constant before $B$ cannot be ignored, e.g., $T^{2^{-B}}$ and $T^{2^{-2B}}$ have very different orders.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the suggestion. We made changes in the abstract to accommodate your suggestion regarding point 2. We hope it is clearer and less confusing now. Accordingly, we have updated the remark in lines 453-459.\\n\\nRegarding experimentation in point 4, we have conducted preliminary experimentation. We compared Algorithm 1 with the UCB algorithm with $k=2,3,4,5$ arms. We performed 100 independent runs with a horizon of $T=10^6$, and the distributions of the $k$ arms were chosen to be normal distributions with means equispaced between $[0,1]$, each with standard deviation 1. 
For each algorithm and $k$, we keep track of the unique traces encountered over the course of the 100 independent runs. We adjusted the hyperparameters so that UCB and Algorithm 1 have similar regrets ($c=0.04$ for Alg 1 and $c=1$ for UCB). Then, we compare the list complexities. Here are the results:\\n\\n|Algorithm Type\\t\\t|\\tNumber of Arms ($k$)|\\t\\tNumber of traces observed \\t\\t|\\tAvg. Expected Regret($\\times10^{-4}$)\\t|\\n|-------------------|:-----------------:|:-----------------------------------------:|:-----------------------------------------:|\\n|Algorithm 1\\t | 2 | \\t\\t\\t2 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t0.11\\t\\t\\t\\t\\t|\\n|Algorithm 1\\t | 3 | \\t\\t\\t3 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t0.99\\t\\t\\t\\t\\t|\\n|Algorithm 1\\t | 4 | \\t\\t\\t4 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t1.40\\t\\t\\t\\t\\t|\\n|Algorithm 1\\t | 5 | \\t\\t\\t5 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t1.67\\t\\t\\t\\t\\t|\\n|UCB\\t\\t\\t | 2 | \\t\\t\\t100 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t0.27\\t\\t\\t\\t\\t|\\n|UCB\\t\\t\\t | 3 | \\t\\t\\t100 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t0.61\\t\\t\\t\\t\\t|\\n|UCB\\t\\t\\t | 4 | \\t\\t\\t100 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t1.00\\t\\t\\t\\t\\t|\\n|UCB\\t\\t\\t | 5 | \\t\\t\\t100 \\t\\t\\t\\t\\t| \\t\\t\\t\\t\\t1.41\\t\\t\\t\\t\\t|\\n\\nUCB encounters a new trace in every single run, whereas Algorithm 1 seems to encounter traces proportional to $k$ when the distributions are equispaced between $[0,1]$.\"}", "{\"comment\": \"Thank you for the review.\", \"response_to_weaknesses\": \"1. For the purposes of list replicability, phase elimination algorithms have the desirable property that if hyperparameters are picked correctly then the arms are deleted in one of two consecutive rounds. However, the list complexity of UCB-based algorithms is exponentially dependent on $T$. Consider the case when $k=2$ and both arms have identical distributions. Let $UCB_1$ and $UCB_2$ represent the UCB estimates of arm 1 and arm 2, respectively. 
Let us assume that at some time $t$, $UCB_1<UCB_2$. The UCB algorithm dictates that we shall play arm 2 till the estimate of arm 1 is larger. However, because the samples are random, the time at which we start playing arm 1 is probabilistic. Each time step where a switch is possible results in a new trace. This trait makes the list complexity of UCB-based algorithms exponentially dependent on $T$. Note that the list complexities of our algorithms are independent of $T$.\\n\\n2. The first algorithm achieves optimal regret, and we bound the list size in a way that is independent of $T$. Similarly, the second algorithm achieves optimal regret when $k$ and $\\\\delta$ are constant. \\nThe UCB algorithm, though achieving optimal regret, has high list complexity. We do not know if the UCB algorithm can be modified to get a small list size while achieving optimal regret. The base phase elimination algorithm itself does not have $2^k$ list replicability. Only when the parameters are correctly chosen do we get a list size of $2^k$. As pointed out by the reviewer m3DX, our theoretical contribution is a novel analysis of the phase elimination algorithm (aided by careful choice of parameters) to exhibit a list size of $2^k$. We then show how we can reduce this to $O(k/\\\\delta)$ by applying a novel technique of shifting the deletion criteria. All contributions are theoretical in nature and are novel as they help set foundations for studying list replicability in more general reinforcement learning settings.\", \"response_to_questions\": \"1. Thank you for the correction. It has been corrected in the updated version.\\n2. There is no multiplicative constant attached to $B$. So the regret is simply $\\\\tilde{O}(k^{3/2}T^{1/2+2^{-(B+1)}})$. The updated version fixes this.\"}", "{\"comment\": \"Thank you for your detailed response. 
Points 1 and 3 are now clear.\", \"regarding_point_2\": \"I understand the statement that \\\"any algorithm with regret o(T) must have a list complexity >= k\\\". However, your claim that \\\"it almost exactly matches the k-list replicable upper bound for B=2\\\" is somewhat unclear and potentially misleading. If I understand correctly, the tightness of your result pertains primarily to the value of the first parameter of your (xx,xx)-list replicability in some cases, rather than to the regret itself. In the context of bandits, when we discuss upper bounds matching lower bounds, we typically refer to regret. Similarly, in Section 6 of your paper, there does not appear to be a lower bound in terms of regret.\", \"regarding_point_4\": \"While I recognize that the primary contribution of your work lies in its theoretical contributions, it is crucial to verify its validity even with a toy simulation, as many other theoretical papers do.\"}", "{\"summary\": \"The paper introduces replicability to the multi-armed bandit area through the concept of list replicability and proposes algorithms for both k-armed and linear bandits. Notably, for k-armed bandits, the authors provide a lower bound demonstrating that one proposed algorithm is nearly optimal.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and structured, with a clear motivation. Though short, it presents a comprehensive set of results for both k-armed and linear bandits, though the linear bandit results appear to be preliminary.\", \"weaknesses\": [\"It would be helpful to clarify which variables the hidden logarithmic factors depend on, and whether these factors are consistent throughout the paper.\", \"No experiments are presented.\"], \"questions\": [\"While it seems that replicability papers often omit experiments, bandit experiments are generally straightforward to conduct. 
Did the authors consider demonstrating some experimental results?\", \"Most of the algorithms appear to be adaptations of standard elimination-based bandit algorithms for both k-armed and linear bandit problems. It would be valuable if the authors could reference each classical elimination algorithm and include a side-by-side comparison showing what aspects of these algorithms break replicability and how the new modifications enable it.\", \"Given that the study addresses regret minimization\\u2014typically dominated by UCB-type algorithms for stronger instance guarantees\\u2014the authors\\u2019 choice of elimination-based algorithms is interesting. Could you clarify the rationale behind this choice?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the review.\", \"addressing_weaknesses\": \"1. Thank you for the suggestion. $\\\\tilde{O}(\\\\cdot)$ hides $\\\\log k$, $\\\\log\\\\log T$, and $\\\\log 1/\\\\delta$ factors. This has been clarified in the updated version. These factors are consistent throughout the entire paper.\\n2. This work focuses on theoretical foundations; the main contribution is to define and introduce the notion of list replicability in MAB and explore possibilities and impossibilities.\", \"addressing_questions\": \"1. The focus of this work is to theoretically formalize list replicability in the context of MAB, design list replicable algorithms, and prove impossibility results.\\n2. Known phase elimination algorithms run in batches: during each batch, they estimate the means of the arms and eliminate arms based on certain criteria. These algorithms as presented have a list size around $O(\\\\ell^B)$, where $\\\\ell$ depends on the list complexity of the estimators, which itself could be very large. So new algorithms and analysis are needed. 
\nIn Algorithm 1, the modification is the choice of hyperparameters and a novel analysis. In Algorithm 2, the main modification is the introduction of random shifts, which brings new challenges in the analysis. The third algorithm uses list-replicable estimators. We will add this discussion as a remark.\n\n3. While UCB algorithms have several desirable properties, they seem to be ill-suited to ensure replicability. In particular, the list complexity of UCB-based algorithms seems to be exponentially dependent on $T$. Consider the case when $k=2$ and both arms have identical distributions. Let $UCB_1$ and $UCB_2$ represent the UCB estimates of arm 1 and arm 2, respectively. Let us assume that at some time $t$, $UCB_1<UCB_2$. The UCB algorithm dictates that we shall play arm 2 till the estimate of arm 1 is larger. However, because the samples are random, the time at which we start playing arm 1 is probabilistic. Each time step where a switch is possible results in a new trace. This trait makes the list complexity of UCB-based algorithms exponentially dependent on $T$. Note that the list complexities of our algorithms are independent of $T$. This discussion has been added to the updated version.\"}", "{\"comment\": \"**Regarding the experiment**: I appreciate the authors' efforts in conducting the experiment. Indeed, this is more of a \"preliminary\" experiment. For instance, using UCB as a baseline does not seem like a good choice since it's nearly non-replicable. Additionally, the results for $K=2$ are puzzling, suggesting that UCB performs even worse, which needs further explanation. Despite these issues, based on what the authors have done so far, I believe they can deliver a complete version of their simulation in the final paper. 
I am confident about it.\\n\\n**Regarding the claim in their abstract**: I am not persuaded that merely adding lines 453-459 and changing a few sentences in the abstract make it \\\"more clear and less confusing.\\\" As I previously mentioned, there's no actual lower bound in terms of regret. Therefore, any claims about the tightness of regret should be revised. This is particularly crucial in the title. I believe the claim of \\\"regret-optimal\\\" is not true.\\n\\nMoreover, if the primary claim, as stated in the title \\\"Regret-Optimal List Replicable Bandit Learning: Matching Upper and Lower Bounds,\\\" proves to be inaccurate, does this paper necessitate a new round of review for the next conference? I have a strong concern about it.\\n\\nI am decreasing the score to 3, mainly based on what I said in \\\"regarding the claim in their abstract\\\" above. I remain open to adjusting the score as necessary.\"}", "{\"metareview\": \"This is a borderline paper on a current topic of interest: replicability of experiments considered within the bandit context. Some of the criticisms given by the reviewers concerns the claim of matching upper and lower bounds, (the matching parts refers only to the dependence on T), and that classical algorithms like UCB are not covered by the replicability definition. Despite these criticisms the paper provides new ideas that might turn out to be valuable to the bandit community.\", \"additional_comments_on_reviewer_discussion\": \"There was a significant exchange between reviewers and authors. It did not alter the scores however.\"}", "{\"comment\": \"Thank you for the review.\\n\\nResponse to Weaknesses\\n\\nFormalizing replicability is an emerging research direction and this paper introduces and explores one viable notion. The practical applications of all proposed replicability notions (including ours) need to be investigated thoroughly in collaboration with domain experts. \\n\\nResponse to Questions\\n1. 
$\\rho$-replicability requires a *good seed* to be shared among the independent runs to achieve replicability. However, there is no pragmatic way of checking if a seed is good or not. List replicability does not have such a restriction. So long as samples are being drawn from the same distribution, list replicability is guaranteed.\n2. $C$ is the total number of shifts, and $r$ is uniformly sampled from $[C]$. We have $C=12k/\\delta$, and setting $\\delta=\\frac{1}{2}$ (say) yields $C=24k$ total possible shifts.\n3. Thank you for pointing out the typos. They have been corrected in the updated version.\n4. While UCB algorithms have several desirable properties, they seem to be ill-suited to ensure replicability. In particular, the list complexity of UCB-based algorithms seems to be exponentially dependent on $T$. Consider the case when $k=2$ and both arms have identical distributions. Let $UCB_1$ and $UCB_2$ represent the UCB estimates of arm 1 and arm 2, respectively. Let us assume that at some time $t$, $UCB_1<UCB_2$. The UCB algorithm dictates that we shall play arm 2 till the estimate of arm 1 is larger. However, because the samples are random, the time at which we start playing arm 1 is probabilistic. Each time step where a switch is possible results in a new trace. This trait makes the list complexity of UCB-based algorithms exponentially dependent on $T$. Note that the list complexities of our algorithms are independent of $T$. This discussion has been added to the updated version. It is unclear how an early modification could yield low list replicability when $T$ is large and when distributions are similar.\"}" ] }
0SpkBUPjL3
Unremovable Watermarks for Open-Source Language Models
[ "Miranda Christ", "Sam Gunn", "Tal Malkin", "Mariana Raykova" ]
The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are $\textit{unremovable}$ under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires deteriorating the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
[ "watermark", "large language model" ]
Reject
https://openreview.net/pdf?id=0SpkBUPjL3
https://openreview.net/forum?id=0SpkBUPjL3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wb6OrwRqqI", "qMaSLZyDPq", "qKuggLh9k0", "pr0dZRueJu", "kM8x6iEtlB", "XFayoxReJJ", "RDh7Zd0nRQ", "Qrw5BJ9Ne8", "IU9Uvi49t8", "H9YJE7fElY", "Gk7oCJJEn0", "3qybZESEKN" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "decision", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732631149834, 1730095605246, 1730204153582, 1732505935983, 1737523994402, 1734776358290, 1732546789127, 1730726858631, 1732507063026, 1732506532815, 1732505655452, 1729416563995 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_p4oF" ], [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_p6Eu" ], [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_Pxpj" ], [ "ICLR.cc/2025/Conference/Submission9607/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9607/Area_Chair_4iP7" ], [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_p6Eu" ], [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_8Mtu" ], [ "ICLR.cc/2025/Conference/Submission9607/Authors" ], [ "ICLR.cc/2025/Conference/Submission9607/Authors" ], [ "ICLR.cc/2025/Conference/Submission9607/Authors" ], [ "ICLR.cc/2025/Conference/Submission9607/Reviewer_p4oF" ] ], "structured_content_str": [ "{\"title\": \"Re:\", \"comment\": \"Thanks for the clarifications.\", \"re_assumptions\": \"I am not claiming them to be unreasonable; I just cannot get a good sense of them by reading the (sometimes minimal) discussions. E.g., I still don't understand Assumption 1's discussions, and I don't understand how the restrictions on C in Theorem 1 should be interpreted. you mentioned that this is discussed in Section 5.2 before Theorem 1, but Theorem 1 is before Section 5.2. If you give a more accurate pointer, I will double check.\\n\\nAlso, I wanted to comment on something that you seem to be commenting on it yourself first. 
As I understand, un-removability is *not* a strictly stronger property than black-box robustness, right? Namely, if we prove a model to be non-black-box robust, it is not clear what kind of black-box robustness properties we can infer. That makes the formulation of un-removability a bit less appealing, but I have no suggestions on how to fix it, so I do not take it against the paper.\\n\\nAll in all, I find the problem of this paper very interesting, but the presentation makes it a bit hard to judge how much of an actual jump is made towards this new (interesting) direction.\\n\\nIn summary, I am a bit more positive about the paper now and increased my score.\"}", "{\"summary\": \"This paper introduces the first watermarking scheme for open-source LLMs. The idea is to embed a watermark directly in the model's parameters rather than through the sampling algorithm, making it resilient to tampering in open-source environments. The authors define \\\"unremovable watermarks\\\" for neural networks and implement a scheme that perturbs neuron biases in the model's final layer with Gaussian noise. Detection of the watermark is achieved either by examining the weights for specific bias patterns or by analyzing output text for token frequency markers. 
The watermark is shown to be unremovable, as attempts to erase it degrade model quality, with experimental results on OPT-6.7B and OPT-1.3B supporting this claim.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper introduces a new watermarking scheme to embed unremovable watermarks directly in model weights and resist tampering in open-source environments.\", \"The paper defines \\\"unremovable watermarks,\\\" providing proofs and an analysis of the watermark\\u2019s robustness against attacks and conducting experiments with OPT-6.7B and OPT-1.3B models to demonstrate the approach's effectiveness.\", \"The paper is well-structured, logically presenting its motivation, methodology, and findings, with clear definitions and algorithms. I highly commend the authors for the nice presentation.\"], \"weaknesses\": [\"Now, the authors claim to have introduced the first watermarking scheme for open-source LLMs. What do they mean by this? There are many watermarking schemes which could be deployed in open-source LLMs, so this claim might not be right, as the proposed scheme can also be deployed in closed-source LLMs by the model owners. This leads to the next question: if the LLM is open source, what exactly is the benefit of watermarking when the attacker has direct access to the model's weights? Can the authors expand more on their motivation?\", \"The proposed approach embeds watermark signals into the bias of the last layer's neurons. There is another approach by ByteDance that injects a watermark into the LLM weights by finetuning (https://arxiv.org/pdf/2403.10553). Why is there no comparison with this approach? In fact, why is there no comparison with other watermarking schemes at all?\", \"There are adaptive ways to bypass watermarks. One is by using adaptive paraphrasers. 
If the proposed watermark scheme is unremovable, yet detectable, why are there no empirical results proving the 'unremovability' claim using adaptive paraphrasers, or even normal paraphrasers like Dipper, or even using open-source LLMs for paraphrasing?\", \"How efficient is the detection process? How many tokens does it require to detect the proposed scheme, especially using its optimal hyperparameters? I feel the experiments the authors provided to prove the efficiency and strength of this approach are not enough.\"], \"questions\": \"Please answer those in weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method for embedding watermarks in large language models (LLMs). This method incorporates watermark information by adding noise that follows a normal distribution to the model's output, with the noise serving as the watermark's key. The authors also demonstrate that, under certain assumptions, the embedded watermark is unremovable. The feasibility of the proposed scheme is validated using the OPT-6.7B and OPT-1.3B models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors attempt to embed noise information following a normal distribution as a watermark into the text output of the model. This is an interesting endeavor that could potentially aid future watermark embedding algorithms.\\n\\n2. The paper attempts to theoretically discuss the unremovability of watermarks, which is also an interesting analysis.\", \"weaknesses\": \"1. The paper's description of the algorithm is not clear enough and does not reflect the specific implementation of the watermark embedding algorithm. There is watermark detection for text in Algorithm 4, but there is no embedding algorithm for text.\\n\\n2. 
The paper discusses the unremovability of watermarks, which is generally referred to as robustness in other papers. The paper does not compare the robustness of its approach with that of other papers. It should also discuss the soundness property of the watermark, which typically contradicts robustness.\\n\\n3. The writing in the paper is not clear enough, which makes it difficult to understand the algorithms it contains. Specific issues will be provided below.\", \"questions\": \"1. What is the relationship between Algorithm 4 and Algorithm 3? If Algorithm 4 is the primary method for text watermark detection, then when is Algorithm 3 invoked?\\n\\n2. How is the symbol $\\\\Delta(x_i)$ defined in Algorithm 4? How is it calculated? \\n\\n3. In watermark detection, each token should be evaluated. Why is it necessary to check $x_i \\\\in S$ in line 4 of Algorithm 4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"In answer to your questions:\\n> In Theorem 1 shouldn't the distribution over $w^*$ be centered at $w_{wat}$ and not 0? This is also present in the proof and Theorem 3. Is this a mistake or my misunderstanding of the statements?\\n\\nYou are correct; these distributions should be centered at $w_{wat}$.\\n\\n> L145 states that Aaronson (2022), Christ (2024) and Fairoze (2023) are based on partitioning the tokens into red and green lists. Can you elaborate on this view of these methods, as my understanding was that they are quite different and do not use the red/green concept?\\n\\nAaronson and Christ et al. use hash functions evaluated on a previous portion of the response to determine a set of tokens whose probabilities are increased in the next sampling step. One can interpret this set of tokens with increased probabilities as the \\u201cgreen\\u201d list, and the other tokens as the \\u201cred\\u201d list. In this view, Aaronson and Christ et al. 
randomly choose a green/red partition in each sampling step, and increase the probabilities of the green tokens. Fairoze et al. is similar except that the green/red lists consist of n-grams rather than individual tokens. That is, Fairoze et al. randomly choose a set of (\\u201cgreen\\u201d) n-grams whose probabilities are increased in each sampling step.\\n\\n> Were OPT models modified to use the final layer bias to enable experimental evaluation?\\n\\nYes, OPT models were modified to use the final layer bias. We take the unwatermarked model to be OPT with all zeros as the final layer biases, and report quality scores for unwatermarked text as such. The same could be done for a model that is released open-source. Ideally, the biases in the last layer should be trained rather than set to all zero as in OPT. The fact that we assume that the last-layer biases are trained does restrict the class of models we consider; however, this class is still quite broad, and if one wished to release a watermarked model one could easily train the last layer biases.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This submission describes a watermarking approach for open-source LLMs. The submission correctly identifies that a desirable property of a watermark embedded in an open-source model would be that it is not easily removable. The approach injects a uni-gram/zero-context watermark in the sense of Zhao et al. 2024, by modifying the biases of the last linear layer of the model. The authors then formalize a particular notion of unremovability under which they show that this watermark is unremovable.\\n\\nReviewers mainly disagree with the applicability of the theoretical findings in this work. The way the authors formalize the notions of unremovable watermarks and high-quality text was found not acceptable. 
Many reviewers also note that baseline attacks against models with open-source watermarks should just be the same as attacks against watermarked API models, i.e. using adaptive attacks or paraphrasing.\\n\\nDuring the response, the authors retreat to the position that their proof is sound, but this is not in disagreement. The applicability of the proved unremovability and the statement of the assumptions are in question.\\n\\nOn a different note, from my reading, I would also point out that there are simple baseline attacks that should have been considered. The analysis works based on Gaussian modifications to the bias being hard, but an attack can easily 1) finetune the model to obtain a new bias layer (especially as current-generation models do not derive substantial quality from their last layer bias, if they include one at all), or 2) given that the watermark is based on a last-layer bias modification, it is input independent, and a simple tabulation of unigram probabilities over sufficient quantities of text will approximate the exact bias vector that needs to be removed.\\n\\nFinally, I am not convinced that Assumption 3 holds in the setting proposed in this work, where the watermark is input-independent by design.\\n\\nI do not recommend acceptance of this submission. I do think the review period was productive, and I hope the authors incorporate the received feedback into future versions of their material.\", \"additional_comments_on_reviewer_discussion\": \"This metareview is based on the review of 8Mtu and my own understanding of the submission. I also include interesting comments from p6Eu and p4oF.\"}", "{\"comment\": [\"Thank you for your response.\", \"If I understand correctly, your approach addresses threats in scenarios where an attacker has full white-box access to the model's weights. In this context, the threat model assumes that the attacker aims to remove the watermark from the model without compromising the quality of the text it generates. 
Your unremovability guarantees suggest that this is either impossible or highly unlikely under these conditions. While the model-based removability guarantees are theoretically interesting, I am concerned that this scenario may not align with practical applications, as most LLM providers typically only offer black-box access. I may be wrong; maybe you should try to explain your motivations better.\", \"For open-source LLMs, an attacker with white-box access is unlikely to alter the model's parameters if high-quality paraphrasing can remove watermarks effectively. I mean, why would I try to remove the watermark from the model's parameters and risk degrading text quality if I have access to tools like GPT-4, Gemini, Claude, or other well-trained open-source LLMs that can easily generate high-quality paraphrased versions of watermarked text. This suggests that unremovability should be inherent to the text itself, rather than dependent on the model, and that, intuitively, is impossible.\", \"To strengthen your claims, why not provide empirical evidence demonstrating that your theoretical guarantees hold practical value. A thorough evaluation against various attacks such as translation, paraphrasing, etc., would greatly help get the message across, but only if you beat other watermarking algorithms.\", \"For now, I will maintain my current score but encourage you to address these points to make a stronger case for your contributions.\"]}
Experimental results on two OPT models are shown, including the evaluation of robustness to token substitution and Gaussian perturbations to model weights.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"As the authors recognize, watermarking of open-source LLMs is one of the most important open problems in current generative model watermarking research, so studying it has the potential for high impact.\"], \"weaknesses\": [\"Unfortunately I believe the paper in its current state is far from being able to deliver that impact. Namely:\", \"While I agree that some definition must exist, formalizing \\\"LLM that produces high-quality text\\\" as \\\"closeness to the original LLM in weights of the last bias layer\\\" seems arbitrary and far from realistic notions of quality. This greatly simplifies the problem, unfortunately making the theoretical results (claimed key contribution) largely not relevant. While I appreciate the rigor and work authors have put in proving the results, formalizing the intuition that a random vector in N-dimensional space is unlikely to match a particular unknown direction, I unfortunately do not think this provides any valuable insight in terms of the robustness of an OSS watermark to realistic transformations.\", \"Given this, the blanket claims that the watermark is \\\"unremovable\\\" (title, abstract, introduction) render as dangerous overclaims that may cause confusion in the field if the paper is accepted. These should be greatly adjusted and qualified to explain the peculiar definition of quality. To actually get a meaningful notion of unremovability, the authors could consider realistic transformations commonly applied to OSS models such as finetuning, PEFT, quantization, pruning, or at least random modification of all weights (the argument on L129/130 is unclear). These are currently neither discussed in theory nor included in the evaluation. 
Interestingly, the authors recognize that prior work Gu et al. (2024) considers fine-tuning yet they do not consider this themselves.\", \"As the authors also recognize, the proposed scheme is a variant of UnigramWatermark. While scheme simplicity is not a weakness per se, interesting/novel technical aspects of the proposed scheme are also not a strength of this paper. This is further harmed by the fact that popular LLMs often do not use the final layer bias, making the proposed scheme inapplicable. In fact, this is true for OPT models used in this work (https://github.com/huggingface/transformers/blob/v4.46.0/src/transformers/models/opt/modeling_opt.py#L1052), bringing into question the current evaluation.\", \"LLM watermarking, which this paper positions itself as part of, generally focuses on detecting LLM-generated outputs. Yet, this paper starts from the related but different notion of detecting that a model was based on a watermarked model from its weights, and proves key results in this case. This is a new scenario which is unexplained and unmotivated, should be explicitly separated from the common understanding of LLM watermarking promised in early parts of the paper, and raises many questions. For example, if we assume transformations of our OSS model change nothing but the final bias layer, can't we use the other (unchanged) weights to demonstrate that the resulting model was made from our model?\", \"Evaluation has many drawbacks; among other things, it does not include any baseline (such as Gu et al. (2024)), uses high FPRs, and uses no realistic attacks on text such as paraphrasing, generally used in prior work. As the authors note, the performance of the watermark is below non-OSS baselines, which is to be expected, but does not present a case for this method as useful beyond the OSS case.\", \"The paper is written and presented in a confusing and convoluted way, seems to be written in a rush, and it is often very hard to understand the key parts.
I include some examples/suggestions below, in hopes that this helps the authors get insight into the issues and improve their writing in the future to the level expected at ICLR. I am happy to further assist the authors here if they have questions.\", \"(Minor) While this does not affect my assessment, L173 contains another dangerous claim, citing Zhang et al. (2024) to say that any LLM watermark is removable. This is a misunderstanding of the original paper which studies an idealized case where a random walk on the space of \\\"equivalent\\\" documents is possible while preserving quality, and the random walk is rapidly mixing. To avoid misinforming readers, this citation should be appropriately qualified.\", \"Overall, while I do appreciate the authors tackling such a hard and important problem, I do not see the contribution of the paper at this point, and believe it thus to be clearly below the bar for acceptance.\", \"---\", \"The list of writing/formatting/presentation comments for the authors follows, which I hope they find helpful. I do not expect authors to reply to each point, although I should be corrected if I misinterpreted some points.\", \"L53: the phrase \\\"unremovability from a modest amount of text\\\" is confusing and should be made more precise\", \"L54-60 seems to repeat the same point about the adversary twice, requiring several readings\", \"I appreciate the inclusion of the Overview section; however, instead of previewing and summarizing the technical details, this section is often the only place where concepts are explained in terms of the concrete instantiation of interest (LLMs). E.g., 4.1. does not reflect on what unremovability means in the setting we consider, but only provides abstract definitions. This makes the paper hard to read and understand the actual instantiation.\", \"Another choice that contributes to this is the use of \\\"content\\\" to counterintuitively often mean \\\"last layer bias vector\\\" instead of \\\"text\\\".
Similarly in Alg. 3 it is not made clear if \\\"original content\\\" refers to the watermarked or pre-watermarked model weights; effort by the reader is needed to understand this.\", \"Sec. 2 uses (M*, M') for (original, watermarked) model, inconsistent with ($w^\\\\star, w_{wat}$) below, causing some confusion.\", \"L87: \\\"checking the correlation\\\" is quite imprecise\", \"L104: why the region is a halfspace is not explained; while it is a simple property of dot product, this should be made explicit to help readers grep this part\", \"L107: \\\"add to $w^*$\\\" is unclear. I suspect this should say \\\"the adversary can add a vector to $w_{wat}$\\\" instead; this should be made precise.\", \"L230: Q should probably be L? Such mistakes should be especially avoided in key definitions.\", \"L231: \\\"p.p.t.\\\" should be defined, I believe it is not a standard term in this community\", \"L318: logits $l_i$ seem to refer to values before adding any bias? This is very ambiguous and should be made clear in the writing.\", \"\\\"Quality score\\\" shows up first in Fig. 3 but is not previously introduced which is quite confusing.\", \"The paper has no figures before the evaluation which is highly unusual, especially as there are many instances where a visualization would greatly aid understanding (e.g., halfspaces / gaussians in the model parameter space). I suggest the authors generally consider this when writing.\", \"The margins on top of each page are very small which suggests the style file was tweaked. Note that the ICLR formatting instructions state \\\"Tweaking the style files may be grounds for rejection.\\\". While the paper doesn't seem to have been desk rejected in this instance, I strongly suggest the authors remedy this and follow the instructions.\"], \"questions\": [\"In Theorem 1 shouldn't the distribution over $w^\\\\star$ be centered at $w_{wat}$ and not 0? This is also present in the proof and Theorem 3.
Is this a mistake or my misunderstanding of the statements?\", \"L145 states that Aaronson (2022), Christ (2024) and Fairoze (2023) are based on partitioning the tokens into red and green lists. Can you elaborate on this view of these methods, as my understanding was that they are quite different and do not use the red/green concept?\", \"Were OPT models modified to use the final layer bias to enable experimental evaluation?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Now, the authors claim to have introduced the first watermarking scheme for open-source LLMs. What do they mean by this? There are many watermarking schemes which could be deployed in open source LLMs, so this claim might not be right, as the proposed scheme can also be deployed in closed source LLMs by the model owners. Which leads to the next question. If the LLM is open source, what exactly is the benefit of watermarking when the attacker has direct access to model's weights. Can the authors expand more on their motivation?\\n\\nWhat we mean is that our watermark is the first to have any provable robustness guarantee when the attacker has access to the model\\u2019s weights. While any watermark could be deployed in an open source setting, existing schemes would be trivially removable. In contrast, we prove that even when the attacker has this knowledge, removing our watermark requires degrading the quality of the model. This shows that watermarking indeed has a benefit in an open source setting. The benefit of watermarking when the attacker has direct access to the model\\u2019s weights is that it gives us the capability to identify model-generated content. 
The fact that the attacker has access to the weights makes it challenging to construct a watermark that is not easily removable; this is exactly the problem we study in this paper.\\n\\n> The proposed approach embeds watermark signals to the bias of the last layer's neurons. There is another approach by ByteDance that injects watermark into the LLM weights by finetuning (https://arxiv.org/pdf/2403.10553). Why is there no comparison with this approach? In fact, why is there no comparison with other watermarking schemes at all?\\n\\nThe ByteDance paper achieves only heuristic robustness but not provable unremovability. We focus on provable robustness/unremovability guarantees, and are the first to achieve any unremovability guarantee in the open-source setting. We do not perform an experimental comparison, as our main contribution is not a scheme with optimized practical parameters. We will include more thorough discussion of related work, such as ByteDance.\\n\\n> There are adaptive ways to bypass watermarks. One is by using adaptive paraphrasers. If the proposed watermark scheme is unremovable, yet detectable, why are there no empirical results proving the 'unremovability' claim using adaptive paraphrasers, or even normal paraphrasers like Dipper, or even using open source LLMs for paraphrasing.\\n\\nOur unremovability guarantee is that an attacker without sufficient knowledge of high-quality text must either compromise quality or fail to remove the watermark. The amount that quality is compromised depends on the attacker\\u2019s knowledge of high-quality text. Using paraphrasers is a practical instantiation of this tradeoff. If the paraphrasers largely preserve text quality and remove the watermark, it is because they leverage knowledge about high-quality text. On the other hand, paraphrasers that do not embody sufficient knowledge about high-quality text will yield poor-quality text but may remove the watermark.
We acknowledge that an attacker with access to a high-quality paraphraser, or unwatermarked open-source model, can remove the watermark\\u2013 this is inevitable, and this is why we prove unremovability only against an attacker with limited knowledge.\\n\\n> How efficient is the detection process? How many tokens does it require to detect the proposed scheme, especially using its optimal hyperparameters? I feel the experiments the authors provided to prove the efficiency and strength of this approach are not enough.\\n\\nWe show that our watermark is detectable in 300-token responses. We do not attempt to optimize our parameters, as we aim to show a proof-of-concept and not a deployment-ready scheme. Our primary message is that open-source watermarking is theoretically possible, and that our theoretical modeling assumptions are reasonable enough that our provable results carry over into practice. We leave concretely optimizing the scheme in practice to future work.\"}", "{\"comment\": \"> Can you address the two issues of robustness (to small change in the output) and the undetectability of the output (compared to the non-watermarked model) ?\\n\\nWe prove that the watermark is robust to an adversary that changes the weights of the model; this is why we use the term \\u201cunremovable\\u201d rather than \\u201crobust,\\u201d which typically refers to an adversary that changes the response but not the model. We show empirically but do not prove that our watermark is robust to substitution attacks (Figure 4). We agree that showing how unremovability translates to robustness would strengthen the paper. Our scheme is not distortion-free or undetectable. Like Zhao et al. and Kirchenbauer et al., we instead prove a bound on the amount that our watermark changes the output distribution; our proof is via our quality notion. This fact is buried in the proof of Theorem 1 in the appendix; we will emphasize it in the main body. 
We also show empirically how quality is affected by the watermark, in Figure 3b.\\n\\n> In your experiments, how do you measure that the utility of the model has not degraded after adding the watermark. I know that you have an oracle that measures the degrading, but then you instantiate the oracle using mathematical formulas regarding the model. But how do you make sure that this reflects the actual quality of the model\\u2019s output? For example, you could use specific metrics or human evaluation methods to assess output quality more directly.\\n\\nIn our experiments, we use Mistral-7B-Instruct as a quality oracle, following similar work (Piet et al.). \\n\\n> Can you discuss why the assumptions are fine? There are 3 explicit assumptions and (multiple) implicit assumptions in the statement of Theorem 1 (eg., \\u201clet C be such that\\u2026\\u201d or \\u201cc_2-high quality\\u2026\\u201d) I think that discussion is needed before calling assumptions reasonable (instead of putting the word reasonable in the theorem statement).\\n\\nWe discuss the assumptions when they are introduced, in Section 5.2 before Theorem 1. If you have already seen this, can you please be more specific about your concerns?\\n\\n> Can you argue either way about the effect of fine tuning in your watermarked model?\\n\\nWe will add discussion about this. We show that removing the watermark either requires knowledge about the distribution of high-quality text, or results in a model with degraded quality. Formally, our result implies that if fine tuning alters only the last layer of the model, it either fails to remove the watermark, or it uses knowledge of the distribution of high-quality text (e.g., via high-quality training data). \\n\\n> In your experiments: can you be more explicit about what your attacker is? e.g., using a pseudocode.\\n\\nYes, we will add pseudocode. 
We also describe the attacks simply here: In Figures 3a and b, the attacker adds noise to the last layer biases where the noise added to each bias is drawn independently from distributions $N(0, \\\\epsilon^2)$ (1x attack perturbation), from $N(0, (2\\\\epsilon)^2)$ (2x attack perturbation), and $N(0, (5\\\\epsilon)^2)$ (5x attack perturbation) respectively. In Figure 4, the attacker randomly selects a subset of tokens in the response, of the size given on the x axis (the substitution budget). It then substitutes them with random tokens.\"}", "{\"comment\": \"We thank the reviewers for their detailed and constructive comments, and respond here to weaknesses brought up in multiple reviews. We respond to individual questions below.\\n\\n- Unremovability vs. robustness: We consider a far stronger attack setting than existing inference-time watermarks that change the sampling algorithm of the model (e.g., Kirchenbauer et al., Aaronson et al., Christ et al., etc). In our attack setting, the adversary has access to the weights and code for the model. We call resistance to this stronger attack \\u201cunremovability,\\u201d which is similar to robustness although against a stronger attacker. We emphasize that existing robust inference-time watermarks are easily removable\\u2013the attacker can easily modify the code for the sampling algorithm so that it does not embed the watermark at all. As unremovability is a new property, there are no existing works that prove (even a weak form of) unremovability. Therefore, we focus on theoretical, provable unremovability, showing surprisingly that it is possible at all. We reinforce our theoretical results with experiments, which should be viewed as a proof-of-concept rather than evidence that our watermark is optimal compared to inference-time watermarks. 
While not optimal, we do show standard robustness and quality evaluations used in inference-time watermarks, and our watermark has reasonably good robustness and quality (Figures 3 and 4).\\n- We emphasize that we show that it is possible to have open-source watermarks with provable unremovability and reasonable detection rates and robustness in practice, as shown in our experiments. We use similar experiments as in the literature; we use the same substitution attack robustness evaluation as that of Kuditipudi et al., and we use a similar quality score to that of Piet et al. We acknowledge that compared to inference-time watermarks, ours is not optimal; however, it performs reasonably well.\\nWe especially thank the reviewers for the editorial suggestions and will improve the clarity of the writing.\"}", "{\"summary\": \"The problem of watermarking the output of LLMs has been around for some time. Previous work has focused on changing the output distribution of the LLMs, sometimes in an \\u201cundetectable/distortion-free\\u201d way. But this work starts by making the following point:\\nIf one has access to the parameters of an LLM, they can run it to generate output that is not watermarked. \\n\\nThe main problem of this paper is to watermark the parameters of the model itself, in a way that it both gets reflected in the output + even if one gets their hand on the model parameters, they cannot modify it in a way that the watermark is removed from subsequent outputs.\\n\\nOf course one shall consider attackers who can train the model from scratch. So the paper assumes that doing so is hard (e.g., by making the access to the data generation process costly).\\n\\nThe contribution of the paper is the following. They propose a scheme that works by adding Gaussian noise to the last layer of the model before publishing it. 
Also knowing the original model and the noise, they show how to verify whether the generated output comes from their model or not.\\n\\nThe paper then makes multiple assumptions to prove that their scheme is secure. The final watermark suffers from not having robustness or undetectability. It is not clear if such weaknesses are inherent or not.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"As far as I know, this is the first work that aims to formally define and address \\u201cunremovable watermarks\\u201d that are planted in open source models.\", \"weaknesses\": \"The paper does not fully address several, by now well-recognized aspects, of the watermark:\\n1. Robustness of the watermarks. E.g., what if one changes even two characters of the produced output? Or that it deletes parts of the output. Here the paper claims it has done experiments but i could not figure out what exact perturbation channels are studied.\\n2. It seems that the output of the watermarked model here is *not* indistinguishable -- sometimes called undetectable or distortion free -- (in comparison with non-watermarked model's output). This is the ultimate way of arguing that the model\\u2019s utility does not degrade after adding the watermark and the paper does not discuss it clearly. Note that here, I am not talking about \\\"removability\\\". 
This is not about the item above (robustness) but rather about whether the output of the watermarked model differs (in a computationally noticeable way) from the output of the non-watermarked model.\\n\\nTo partially address the above issues, the paper should first define clearly what class of perturbation channels they study (and why they are interesting) for the robustness property evaluations (which are seemingly done under the name of Detectability) and for the item 2 above (undetectability of the output -- which is different from the Detectability study) they should design experiments specifically for this goal or make a theoretical assertion.\\n\\nAlso, the proofs are based on multiple assumptions, which make the final conclusion far from ideal. (See my question below)\\n\\nAlso, what happens to the watermarks if the model is fine-tuned? (note that black-box methods still work, if the model is fine-tuned). This issue should be addressed using experiments. They could be simple experiments that simply test the detectability and utility of the outputs after fine-tuning for specific goals (also see my question below).\\n\\nThe writing also is not great and lacks discussions and justifications with regard to the issues mentioned above (e.g., of the assumptions). \\nOther than the issues above, the intuition behind why this non-black-box approach works could be explained much better.\", \"other_minor_comments_on_writing\": \"Def 2 seems to be more like a \\u201csimilarity\\u201d measure, because the loss in quality seems to be different. For example, two models could look very different but have the same quality of responses.\", \"def_4\": \"seems to mix the input space of Q and \\\\ell, right?\", \"questions\": \"Can you address the two issues of robustness (to small changes in the output) and the undetectability of the output (compared to the non-watermarked model)?\\n\\n In your experiments, how do you measure that the utility of the model has not degraded after adding the watermark.
I know that you have an oracle that measures the degradation, but then you instantiate the oracle using mathematical formulas regarding the model. But how do you make sure that this reflects the actual quality of the model\\u2019s output? For example, you could use specific metrics or human evaluation methods to assess output quality more directly.\\n\\nCan you discuss why the assumptions are fine? There are 3 explicit assumptions and (multiple) implicit assumptions in the statement of Theorem 1 (e.g., \\u201clet C be such that\\u2026\\u201d or \\u201cc_2-high quality\\u2026\\u201d). I think that discussion is needed before calling assumptions reasonable (instead of putting the word reasonable in the theorem statement).\\n\\nCan you argue either way about the effect of fine-tuning in your watermarked model?\", \"in_your_experiments\": \"can you be more explicit about what your attacker is? e.g., using pseudocode.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
0RgLIMh94b
Diffusion Curriculum: Synthetic-to-Real Data Curriculum via Image-Guided Diffusion
[ "Yijun Liang", "Shweta Bhardwaj", "Tianyi Zhou" ]
Low-quality or scarce data has posed significant challenges for training deep neural networks in practice. While classical data augmentation cannot contribute very different new data, diffusion models open up a new door to build self-evolving AI by generating high-quality and diverse synthetic data through text-guided prompts. However, text-only guidance cannot control synthetic images' proximity to the original images, resulting in out-of-distribution data detrimental to the model performance. To overcome this limitation, we study image guidance to achieve a spectrum of interpolations between synthetic and real images. With stronger image guidance, the generated images are similar to the training data but hard to learn. With weaker image guidance, the synthetic images will be easier for the model but contribute to a larger distribution gap with the original data. The generated full spectrum of data enables us to build a novel "Diffusion CurricuLum (DisCL)". DisCL adjusts the image guidance level of image synthesis for each training stage: It identifies and focuses on hard samples for the model and assesses the most effective guidance level of synthetic images to improve hard data learning. We apply DisCL to two challenging tasks: long-tail (LT) classification and learning from low-quality data. It focuses on lower-guidance images of high quality to learn prototypical features as a warm-up for learning higher-guidance images that might be weak on diversity or quality. Extensive experiments showcase a gain of 2.7$\%$ and 2.1$\%$ in OOD and ID macro-accuracy when applying DisCL to the iWildCam dataset. On ImageNet-LT, DisCL improves the base model's tail-class accuracy from 4.4$\%$ to 23.64$\%$ and leads to a 4.02$\%$ improvement in all-class accuracy.
[ "Synthetic data", "Curriculum Learning", "Diffusion Models" ]
https://openreview.net/pdf?id=0RgLIMh94b
https://openreview.net/forum?id=0RgLIMh94b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWOW6OpfAS", "x0X3DWEYDg", "p04h9aZhxw", "lktbNnlGA7", "NF1ESdDtEy", "HuUW7COJkS", "GKPaLwtFrD", "FdBCeGmCc1", "0dZS9xPo7B" ], "note_type": [ "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "comment", "official_comment" ], "note_created": [ 1730083869496, 1731102626392, 1731653294962, 1730271833219, 1731653481340, 1731653254653, 1730122827607, 1731655325514, 1731653443322 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5316/Reviewer_9SrJ" ], [ "ICLR.cc/2025/Conference/Submission5316/Reviewer_2cVr" ], [ "ICLR.cc/2025/Conference/Submission5316/Authors" ], [ "ICLR.cc/2025/Conference/Submission5316/Reviewer_HFGS" ], [ "ICLR.cc/2025/Conference/Submission5316/Authors" ], [ "ICLR.cc/2025/Conference/Submission5316/Authors" ], [ "ICLR.cc/2025/Conference/Submission5316/Reviewer_VQ25" ], [ "ICLR.cc/2025/Conference/Submission5316/Authors" ], [ "ICLR.cc/2025/Conference/Submission5316/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper addresses the challenge from training computer vision models (image classification task) with low-quality or scarce data. The paper proposes Diffusion Curriculum (DisCL) which leverages diffusion models to synthesize hard image examples data with different guidance scales and then utilizes a Generative Curriculum Learning to select appropriate synthetic data from the full spectrum of generated data for training data augmentation. Experiments are conducted on two tasks: long-tail classification and learning from low-quality data, to show the method's effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of adjusting guidance scales to obtain a greater variety and quality of training data augmentation is interesting and novel.\\n2. Generative Curriculum Learning is reasonable and can adjust for different tasks.\\n2. 
The proposed method proves its effectiveness in both experiments, long-tail classification and learning from low-quality data.\", \"weaknesses\": \"1. A tradeoff is the speed of generating a full spectrum for images. For large datasets, the diffusion models can take a long time to generate a full spectrum of needed images.\", \"questions\": \"I am concerned about using only the pre-trained stable diffusion model without further fine-tuning (I am happy to update the rating if the following questions are addressed).\\n1. Were there any observed biases/failures in the synthetic data generated for small-resolution datasets (for example CIFAR100-LT)? I am concerned because of the resolution differences between the CIFAR100-LT dataset and the resolution the stable diffusion model is trained on.\\n2. In real-life scenarios, some datasets we want to train the models on are not real photographs; do you think a pre-trained diffusion model and your proposed method can be effective for Long-Tailed or low-quality datasets in domains of Comics, Drawings, etc?\\n3. Using CLIP score as a threshold to filter out generated images is reasonable, but for some classes it can be easy to filter out too much generated data from pretrained stable diffusion (therefore not a full spectrum of data can be generated and saved). May I ask if this situation also occurs in your experiments, and could you provide the percentage of filtered images and any strategies you employed to ensure sufficient data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
The proposed method, DisCL, generates images using both text and real-image conditioning, with various image guidance scales to regulate the similarity to the real image, allowing for control over the hardness/complexity of the generated sample. CL is applied to select which complexity of samples (image guidance scale) to use based on the task at hand, e.g., for long-tail learning a diverse-to-specific CL algorithm is used, while for low-quality image learning an adaptive algorithm is used. A first set of experiments compares DisCL versus baselines using data augmentation or balanced softmax for long-tailed classification, showing positive impact mostly on less represented classes. A second set of experiments tests DisCL in the task of low-quality images using the iWildCam dataset. Here, DisCL is plugged into state-of-the-art fine-tuning techniques to show improved performance on both out-of-distribution and in-distribution examples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The method is simple and seems to work for both long-tailed and low-quality image classification.\"], \"weaknesses\": [\"1. Lacking some relevant related works. The method does not mention nor compare against methods that already tried to leverage synthetic data as data augmentation to cope with unbalanced data, e.g., Hemmat-Askari et al 2023, or for representation learning / classification, e.g., Tian et al 2024a and 2024b, Astolfi et al 2023. In particular, Hemmat-Askari et al 2023 seems quite related as they target the same task and use a similar synthetic data generation approach while having some sort of adaptive curriculum learning (feedback guidance) which regulates the type of generation needed by the model. It would be nice to understand how DisCL compares against it. Finally, ALIA (Dunlap et al 2023) is mentioned as a related work and a baseline in 3.1.2, but is never present in the results.\", \"2.
Weak / unclear experimental settings:\", \"The Resnet-10 choice is motivated by the comparison with LDMLR; however, most of the comparisons are with CUDA, which uses resnet-32 for CIFAR-100 and resnet-50 for ImageNet. Do you expect your results to hold with these larger resnets?\", \"Some experimental details are unclear to me.\", \"It is not clear to me whether baselines and DisCL are trained for the same amount of iterations/epochs.\", \"In the training details the authors say: _\\\"To preserve a constant imbalance-ratio throughout all training stages and experiments, we undersample the non-tail samples at \\\"each stage\\\" so that ratio of tail-samples to non-tail samples matches the proportion of tail classes to non-tail classes present in the original data (13.6%).\\\"_. If I am reading this correctly, the authors say that they prefer to keep the dataset imbalanced, despite having the possibility of rebalancing it with synthetic data. Why this choice?\", \"The results on ImageNet-LT show small improvements w.r.t. the balanced softmax (BS) baseline (+1.5%). By looking at Hemmat-Askari et al 2023 results, the BS baseline is outperformed by a large margin. I understand that the number of generated data in Hemmat-Askari et al is on another scale (1.3M vs. 25K). Do you think the scale is enough to justify this difference?\", \"Also, combining BS with DisCL sometimes leads to lower results than CE + DisCL (see Table 2). Is there any intuition why BS does not seem to be as effective for DisCL?\", \"Bolding in Table 2 is inconsistent\", \"*_Tian, Yonglong, et al. \\\"Stablerep: Synthetic images from text-to-image models make strong visual representation learners.\\\" Advances in Neural Information Processing Systems 36 (2024)._\", \"_Tian, Yonglong, et al. \\\"Learning vision from models rivals learning vision from data.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024._\", \"_Hemmat, Reyhane Askari, et al. 
\\\"Feedback-guided data synthesis for imbalanced classification.\\\" arXiv preprint arXiv:2310.00158 (2023)._\", \"_Astolfi, Pietro, et al. \\\"Instance-conditioned gan data augmentation for representation learning.\\\" arXiv preprint arXiv:2303.09677 (2023)._\", \"_Dunlap, Lisa, et al. \\\"Diversify your vision datasets with automatic diffusion-based augmentation.\\\" Advances in neural information processing systems 36 (2023): 79024-79034._\"], \"questions\": \"Please respond to the weaknesses listed. Given the many clarification required I consider this work below the acceptance bar.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your valuable feedback! We have carefully reviewed your comments and address them as follows.\\n\\n> Existing works related to image guidance and contribution. \\n\\nOne of our contributions is generating a complete and smooth spectrum of synthetic-to-real data and using this spectrum in training simultaneously. Different data are selected to be used at each epoch. Previous work, such as SyntheticData [1], introduces image guidance to control the similarity between generated images and the original input. However, although they incorporate image guidance in the data generation process (Real Guidance (RG) Strategy in [1]), each training run only utilizes a single image guidance level to produce synthetic images, rather than leveraging multiple guidance levels across training.\\nIn [1], specific image guidance levels are chosen for different few-shot settings, and the corresponding synthetic data is used exclusively for each setting. In contrast, our approach applies the entire spectrum of image guidance levels in a unified training process, offering a richer and more continuous range of synthetic-to-real features for curriculum designs and model adaptation.\\n\\n- [1] He, Ruifei, et al. 
\\\"Is synthetic data from generative models ready for image recognition?.\\\" arXiv preprint arXiv:2210.07574 (2022).\\n\\n> Questions related to current results.\\n\\nDisCL primarily targets hard samples within the original dataset, focusing on generating synthetic data specifically for underrepresented classes, labeled as \\u201cFew\\u201d classes in Table 1. As shown in Table 1, the accuracy for \\u201cFew\\u201d classes improves from 17.90% (Text-only) and 19.17% (All-Level) to 23.64% (DisCL), highlighting the effectiveness of the curriculum paradigm applied in our method.\\n\\nHowever, given that only 136 out of 1,000 classes (1,643 out of 115,846 samples) fall into the \\u201cFew\\u201d category, the overall improvement in accuracy appears less pronounced, as the impact is more localized to these underrepresented classes.\\n\\n> Ablation studies on CIFAR100-LT and iNaturalist2018.\\n\\nWe use ImageNet-LT as the primary dataset for our long-tail classification tasks and conduct ablation studies on this dataset. We plan to run additional experiments on CIFAR100-LT and iNaturalist2018, to further validate our approach and strengthen the robustness of our findings.\"}", "{\"summary\": \"This paper tries to incorporate the curriculum learning technique into image data augmentation.\", \"this_paper_evaluated_the_proposed_method_on_two_tasks\": \"long-tail classification and image classification with low-quality data to show the effectiveness.\", \"contribution\": \"2\", \"soundness\": \"2\", \"presentation\": \"2\", \"strengths\": \"1. The experiment is well designed.\\n2. The visualization is good.\\n3. A substantial improvement has been achieved for some tasks.\\n4. Combine the curriculum learning into generative data augmentation.\", \"weaknesses\": \"1. In line 87, \\\"We harness image guidance in diffusion models to create a spectrum of synthetic-to-real data\\\". I don't think this is a contribution of yours. 
In ICLR 2023, one paper called \\\"IS SYNTHETIC DATA FROM GENERATIVE MODELS READY FOR IMAGE RECOGNITION?\\\" has already proposed to leverage both image and text guidance for data augmentation, and there are a lot of follow-up works.\\n\\n2. In the method part, most of the words recall diffusion theory and image-text guidance, which are both not your contribution. I think the main contribution is how to leverage the various-quality data with curriculum learning. However, Sec. 3.2 is quite short and simple.\\n\\n3. In the ablation part of Table 1, compared with CE + Text-only Guidance (39.10% overall accuracy) and All-Level Guidance (39.40% overall accuracy), CE + DisCL gets very limited improvement.\", \"questions\": \"In Table 2 and Table 3, can you provide the results of CE + Text-only Guidance and CE + All-Level Guidance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your valuable feedback! Here we address your comments.\\n\\n> Tradeoff between time and synthetic dataset.\\n\\nDisCL generates synthetic data exclusively for hard samples in the original datasets, covering approximately only 10% of total samples. Additionally, as noted in Appendix A.5, generating a complete spectrum of data across six image guidance levels (at a resolution of 480\\u00d7270) takes only 10 seconds. This generation time is efficient and manageable within these parameters.\\n\\n> Failure cases for low-resolution datasets.\\n\\nFor low-resolution datasets, image editing with diffusion models can be challenging. To address this, we start by using a super-resolved version of CIFAR100 images (resized with CAI super resolution) as the base for synthetic data generation. 
The generated images are then downsampled to 32\\u00d732 for training.\\nAdditionally, we observed that lower image guidance levels can produce significantly altered synthetic images, creating a large discrepancy between the synthetic and original images. To improve the quality of synthetic data for low-resolution datasets, we adjust the image guidance level to a range of 0.5 to 0.9, ensuring closer alignment with the original images.\\n\\n- 128*128 resized CIFAR100 dataset: https://www.kaggle.com/datasets/joaopauloschuler/cifar100-128x128-resized-via-cai-super-resolution\\n\\n> Application to non-realistic domains.\\n\\nFor datasets in non-realistic domains such as Comics or Drawings, we can first evaluate the generation quality of the pre-trained diffusion model on the target dataset. If the quality is insufficient, we can switch to pre-trained models specifically trained on the corresponding or similar domains.\\n\\nIn the future, we plan to integrate advanced diffusion training techniques, such as ControlNet and DreamBooth, to enhance the compatibility of diffusion models with the original dataset, ensuring higher-quality generation aligned with the domain\\u2019s characteristics.\\n\\n> Questions related to CLIPScore filtering process. \\n\\nWe assess the effect of CLIPScore filtering with human evaluation. To ensure the quality of synthetic data, we choose a relatively high threshold of CLIPScore for data filtering, which results in a high ratio of abandoned data. \\nThe percentage of filtered images is shown below (also reported in Table 6 in the Appendix).\\n\\n| | Generated Images | Generated Images After Filtration | Acceptance Rate |\\n| --- | --- | --- | --- |\\n| ImageNet-LT | 51917 | 24141 | 46.50% | \\n| CIFAR100-LT (irb=50) | 2592 | 809 | 31.21% | \\n| CIFAR100-LT (irb=100) | 2144 | 668 | 31.16% | \\n| iNaturalist2018 | 179824 | 75234 | 41.84% | \\n| iWildCam | 197756 | 90093 | 45.56% |\"}", "{\"comment\": \"Thank you for your detailed feedback! 
We carefully address your comments as below.\\n\\n> Relevant related works regarding data augmentation.\\n\\nThank you for pointing out these related works! We would like to emphasize key distinctions between our approach and these methods.\", \"our_discl_framework_consists_of_two_key_components\": \"generation of **spectrum of sync-to-real** data and the **curriculum learning paradigm** designed to effectively leverage this synthetic data. In contrast, SynCLR (Tian et al., 2024a; Tian et al., 2024b), DAIC-GAN (Astolfi et al., 2023), and Feedback-guided synthesis (Hemmat-Askari et al., 2023) primarily focus on generating data for downstream tasks without integrating curriculum learning.\\nSpecifically, feedback-guided data synthesis proposes to use the Feedback Criteria (Loss / Entropy / Hardness) from the pre-trained model to generate the synthetic data. However, once the data is generated, all synthetic samples are used indiscriminately in training without any selection or progressive filtering.\\n\\nIn DisCL, beyond data generation with various image guidance, we further implement a curriculum learning paradigm to select data at each training epoch. This iterative selection helps the classifier incrementally bridge the gap between synthetic and real data distributions, enhancing model robustness and generalization.\\n\\nThe ALIA result is shown as below. we will update Table 5 in our paper with these results. \\n\\n| | OOD F1 Score | ID F1 Score |\\n| --- | --- | --- |\\n| ALIA | 36.9 (0.3) | 52.6 (0.4) |\\n| DisCL | **38.2 (0.5)** | **54.3 (1.4)** |\\n\\nFor experiments with larger models (ResNet32 and ResNet50), we expect our result will still hold. 
We will run them and use the results to strengthen our paper.\\n\\n\\n> Questions related to training settings.\\n\\nFor the question about training epochs: in our experiments, we always keep the total training epochs of DisCL less than or equal to those of the baselines.\\n\\nFor the question about the balance ratio: DisCL is designed to improve performance on hard samples in the original dataset, specifically targeting \\u201cFew\\u201d classes in long-tail classification scenarios. In the data generation process, we focus exclusively on generating synthetic data for underrepresented classes, excluding \\u201cMedium\\u201d classes. Furthermore, generating synthetic data to achieve a fully balanced dataset would be both time- and resource-intensive. Thus, we maintain the original imbalance ratio in the generated dataset, allowing us to evaluate DisCL\\u2019s performance without changing population bias due to class distribution changes. To further validate our approach, we plan to conduct additional experiments using balanced synthetic datasets, aiming to strengthen our findings.\\n\\n> Questions related to current results compared with Hemmat-Askari et al 2023.\\n\\nOne key difference between Feedback-guided synthesis (Hemmat-Askari et al., 2023) and DisCL is the scale of the synthetic dataset used (1.3M samples vs. 25k in DisCL). Additionally, Feedback-guided synthesis generates synthetic data to augment the entire real dataset, whereas DisCL focuses on enhancing model performance specifically on challenging samples within the original dataset. For long-tail classification tasks, DisCL selectively generates synthetic data only for underrepresented classes (few classes in the original dataset). To further investigate this distinction, we plan to conduct additional experiments that control for data scale and class distribution.\\n\\n> Questions related to Table 2 results.\\n\\nThank you for your questions and for pointing this out! 
We will correct our Table 2 as follows.\\n| Imbalance Ratio=100 | Many | Medium | Few | Overall |\\n| --- | --- | --- | --- | --- |\\n| CE | 52.86 | 25.34 | 5.49 | 29.02 | \\n| CE + CUDA | **54.55** | **26.07** | 5.43 | 29.85 | \\n| CE + DisCL | 53.14 | 25.52 | **10.65** | **30.91** | \\n| BS | 47.87 | 30.07 | 14.41 |31.61 | \\n| BS + CUDA | 48.01 | **32.79** | 15.55 | 33.02 | \\n| BS + DisCL | **49.02** | 29.02 | **19.07** | **33.08** | \\n\\nSince the Balanced Softmax (BS) function helps mitigate bias between dominant and underrepresented classes, BS+DisCL still achieves better performance than CE+DisCL, as DisCL continues to work with an imbalanced training dataset.\"}", "{\"summary\": \"Training deep learning models with low-quality or limited amounts of data often results in overfitting or suboptimal performance. To overcome this challenge, data augmentations have been an integral part of training deep learning models. However, classical data augmentations offer limited diversity and may also result in out-of-distribution samples, hampering the performance of the model. Therefore, recent research has focused on using generative models for data augmentations. Building in this direction, the authors propose a method to create a spectrum of interpolations between synthetic and real images called Diffusion Curriculum (DisCL). Focusing on the long-tail classification and learning from low-quality data tasks, the author demonstrates the efficacy of DisCL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors address the gap between real and synthetic data generated using diffusion models by designing a generative curriculum that can adjust the quality, diversity, and difficulty of the data for different training stages. 
This provides a new perspective on generative data augmentation, given that the majority of prior work considered a fixed image guidance scale throughout the training.\", \"The effectiveness of generative data augmentation strategies primarily depends on the performance of the generative model. To address the potential shortcomings of the generative models and thereby improve the effectiveness of the data augmentation, the authors proposed using CLIPScorer to filter out low-fidelity images.\"], \"weaknesses\": [\"The paper is difficult to follow. This is partly because key details are missing in the main paper. For instance, in Section 4.1, the authors mention the use of a set of diverse textual prompts, while the details are deferred to the appendix. Another instance is in Section 4.2, where the authors mention that inspired by DoCL, they propose an adaptive curriculum. However, there is no discussion of the proposed adaptive curriculum in that section.\", \"The concept that the choice of the starting timestep $t$ controls the impact of $z_{real}$ has been extensively studied in prior works, notable being SDEdit [1]. Therefore, it would be easier for the readers to follow if the authors cite the existing works and explain the similarities.\", \"Using generative models for data augmentation has been an active area of research, with many approaches proposed in the literature [2,3,4,5]. The authors can compare their approach with these existing works to substantiate their novelty and demonstrate the impact of using a pre-defined or adaptive generative curriculum. The current evaluation is limited.\"], \"references\": [\"Meng, Chenlin, et al. \\\"Sdedit: Guided image synthesis and editing with stochastic differential equations.\\\" arXiv preprint arXiv:2108.01073 (2021).\", \"Roy, Aniket, et al. \\\"Cap2aug: Caption guided image to image data augmentation.\\\" arXiv preprint arXiv:2212.05404 (2022).\", \"Luzi, Lorenzo, et al. 
\\\"Boomerang: Local sampling on image manifolds using diffusion models.\\\" arXiv preprint arXiv:2210.12100 (2022).\", \"Koohpayegani, Soroush Abbasi, et al. \\\"GeNIe: Generative Hard Negative Images Through Diffusion.\\\" arXiv preprint arXiv:2312.02548 (2023).\", \"Trabucco, Brandon, et al. \\\"Effective data augmentation with diffusion models.\\\" arXiv preprint arXiv:2302.07944 (2023).\"], \"questions\": [\"Are \\u2018diverse to specific\\u2019 and \\u2018easy to hard\\u2019 curriculum strategies the same? If so, why are they called differently?\", \"In Table 1, for the \\u201cFew\\u201d class, the impact of DisCL is more significant when using Cross Entropy compared to when Balanced Softmax is used. Why?\", \"One of the benchmarks for learning from low-quality data is ALIA. Which Table contains the results with ALIA?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for your feedback and suggestions! We provide responses to your questions below.\\n\\n> Detailed information for Section 4.\\n\\nThanks for your advice! The details of DoCL and its adaptive curriculum are discussed in Appendix A.3.2. We will try our best to move back more details from the Appendix to the main paper.\\n\\n> Related works about using generative models for data augmentation. \\n\\nThank you for pointing out these related works! We would like to clarify key distinctions between our approach and these methods.\", \"our_discl_framework_is_built_on_two_primary_components\": \"the generation of a **spectrum of sync-to-real synthetic data** and a **curriculum learning paradigm** designed to effectively leverage this data. 
Unlike previous works, which focus primarily on synthetic data generation for downstream tasks without integrating curriculum learning, DisCL strategically bridges the distributional gap between synthetic and real data.\\nFor example, Cap2Aug (Roy et al., 2022) generates data with fine-grained modifications, while Boomerang (Luzi et al., 2022) synthesizes data through local sampling to closely resemble the original inputs. Both methods focus on localized changes and thus lack the diversity needed for long-tail tasks and easier or typical samples needed for low-quality tasks.\\nGeNIe (Koohpayegani et al., 2023) generates hard negative samples to improve the classifier\\u2019s ability to distinguish between positive and negative samples. However, for low-quality tasks like iWildCam, the primary challenge is aligning visual features of hard positive samples with pre-trained knowledge\\u2014an issue that GeNIe does not fully address. DA-Fusion (Trabucco et al., 2023) increases synthetic data diversity, yet it lacks either the spectrum of varied synthesis or the curriculum-driven approach that DisCL provides. \\nDisCL\\u2019s curriculum-based framework constructs a smooth transition from synthetic data (representing prototypical or diverse features) to real data (which contains task-specific but often limited features). This smooth spectrum aids in adapting pre-trained models more effectively to low-quality or long-tail tasks.\\nWe plan to explore integrating image editing approaches from these works with DisCL\\u2019s spectrum generation and curriculum paradigm to further enhance our results.\\n\\n> Question related to ALIA results.\\n\\nThe results of ALIA are shown below. We will update the result in Table 5. \\n\\n| | OOD F1 Score | ID F1 Score |\\n| --- | --- | --- |\\n| ALIA | 36.9 (0.3) | 52.6 (0.4) |\\n| DisCL | **38.2 (0.5)** | **54.3 (1.4)** |\\n\\n> Question related to curriculum strategies. 
\\n\\nThe 'diverse to specific' and 'easy to hard' curriculum strategies both refer to 'synthetic (lower image guidance) to real (higher image guidance)' curriculum strategies (as mentioned in line 360 and 407). However, for different tasks, synthetic data has different properties. For long-tail task, synthetic data with lower image guidance provides more diversified visual features, while providing easier and more proto-typical features for low-quality tasks. To illustrate the property of synthetic data under different tasks, we use these two names. \\n\\n\\n> Questions related to Cross Entropy and Balanced Softmax.\\n\\nDue to the population bias among classes, the model struggles to sufficiently learn visual features from underrepresented \\u201cFew\\u201d classes while using Cross Entropy loss function. As a result, the Cross-Entropy (CE) baseline performs significantly worse compared to the Balanced Softmax (BS) loss, which explicitly addresses class imbalance by reweighting softmax probabilities based on class frequency.\\n\\nWhen applying DisCL, our method introduces more diverse features for underrepresented classes, partially mitigating the effects of population bias and improving performance under CE. However, for Balanced Softmax, which already compensates for class imbalance directly, the additional impact of DisCL\\u2019s diverse features and samples is less pronounced.\\n\\n- Meng, Chenlin, et al. \\\"Sdedit: Guided image synthesis and editing with stochastic differential equations.\\\" arXiv preprint arXiv:2108.01073 (2021).\\n- Roy, Aniket, et al. \\\"Cap2aug: Caption guided image to image data augmentation.\\\" arXiv preprint arXiv:2212.05404 (2022).\\n- Luzi, Lorenzo, et al. \\\"Boomerang: Local sampling on image manifolds using diffusion models.\\\" arXiv preprint arXiv:2210.12100 (2022).\\n- Koohpayegani, Soroush Abbasi, et al. 
\\\"GeNIe: Generative Hard Negative Images Through Diffusion.\\\" arXiv preprint arXiv:2312.02548 (2023).\\n- Trabucco, Brandon, et al. \\\"Effective data augmentation with diffusion models.\\\" arXiv preprint arXiv:2302.07944 (2023).\"}" ] }
0Ra0E43kK0
CaLMol: Disentangled Causal Graph LLM for Molecular Relational Learning
[ "Peiwen Li", "Xin Wang", "Zeyang Zhang", "Linxin Xiao", "Yang Li", "Wenwu Zhu" ]
Molecular Relational Learning (MRL), focused on understanding interactions between molecular pairs, is essential for drug design by utilizing both structural properties and textual knowledge, such as expert documents. However, most existing MRL methods assume static molecular distributions, meaning the distributions remain consistent across training and testing stages. This assumption may lead to the exploitation of variant correlations between structures and texts regarding interactions, thereby failing in the ubiquitous scenarios involving new drug predictions. To bridge this gap, we investigate zero-shot MRL by leveraging invariant relationships between molecular texts and structures w.r.t. interactions for new molecules, which is largely unexplored in the literature and is highly non-trivial with the following challenges: 1) How to disentangle molecular structure components between each pair to intrinsically determine interactions and address potential structural distribution shift issues for new drugs? 2) How to align molecular structures with semantic textual information to achieve invariant molecular relation predictions for new drugs? To tackle these challenges, we propose a novel Causally Disentangled Invariant Graph Large Language Model (LLM) for Molecular Relational Learning (CaLMol), capable of exploiting invariant molecular relationships to predict interactions for new drugs. Specifically, we propose Causal Molecule Substructure Disentanglement to capture the invariant well-recognized substructure pair for a specific molecule interaction. Then, we propose Molecule Structure and Property aware LLM Alignment to use molecule (with invariant substructure)-textual property pairs to align structure information to semantic information, and use them together to guide the interaction prediction. On this basis, the LLM can also provide further explanations. 
Extensive experiments on qualitative and quantitative tasks including 7 datasets demonstrate that our proposed CaLMol achieves advanced performance on predicting molecule interactions involving new molecules.
[ "Molecular Relational Learning", "Large Language Model", "Graph Neural Network", "Causal Learning" ]
Reject
https://openreview.net/pdf?id=0Ra0E43kK0
https://openreview.net/forum?id=0Ra0E43kK0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x6MVe6EEba", "lfb1m5bLJV", "dV5G5eUl2U", "XhLhPQpwVK", "SKvtLA1iSK", "KcIRFqHMde", "H2dgSgQsIP" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1730365148558, 1734407348415, 1733315360481, 1730435583544, 1737524010382, 1730652649437, 1730355921965 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9855/Reviewer_9ZZt" ], [ "ICLR.cc/2025/Conference/Submission9855/Area_Chair_yEGL" ], [ "ICLR.cc/2025/Conference/Submission9855/Authors" ], [ "ICLR.cc/2025/Conference/Submission9855/Reviewer_jTk6" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9855/Reviewer_Aib3" ], [ "ICLR.cc/2025/Conference/Submission9855/Reviewer_Y1vr" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces CALMOL, a causally disentangled invariant graph large language model (LLM) tailored for molecular relational learning (MRL), with a particular focus on zero-shot scenarios requiring predictions of new molecular interactions. By integrating Graph Neural Networks (GNNs) with LLMs, CALMOL captures causal structural relationships and aligns molecular structures with semantic information, thereby improving predictions in drug design and molecular interaction studies. Overall, this paper is highly intriguing and meaningful, but there are several issues that require attention.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The starting point of this paper is interesting; exploring causal substructures with large models is indeed an engaging and meaningful topic.\\n2. Generalization and Robustness: By leveraging invariant relationships across molecular structures and text, CALMOL effectively addresses distribution shifts between known and new drugs, thus enhancing generalization to unseen molecules. 
CALMOL maintains consistent performance across various dataset splits (Section 4.1).\", \"weaknesses\": \"1. **Assumption on Molecular Distributions**: The paper claims that most existing MRL methods assume the same molecular distributions. However, I rarely encounter papers that explicitly make assumptions about molecular distributions, and the term \\\"molecular distributions\\\" is somewhat ambiguous, requiring further clarification. To substantiate this claim, I would recommend that the authors provide specific examples of existing MRL methods that make this assumption or clarify precisely what they mean by \\\"molecular distributions\\\" in this context.\\n\\n2. **Effectiveness of Molecular Feature Extraction**: The model only uses SMILES information during the modality alignment process, yet SMILES is also provided in the input. This raises questions about the effectiveness and actual contribution of molecular graph feature extraction. I suggest the authors clarify the role and contribution of molecular graph feature extraction in their model, given that SMILES information is used in multiple stages. An ablation study or analysis showing the added value of graph feature extraction over using SMILES alone would be helpful in addressing this concern.\\n\\n3. **Novelty of the Method**: The method\\u2019s novelty is questionable; the paper seems to merely link motif sets\\u2019 causal motif extraction with LLMs in a fairly straightforward manner, without a clear motivation. Additionally, the paper claims that the LLM provides further interpretability, yet no relevant case study is provided in the experimental section to support this. I suggest that the authors provide a more detailed comparison with existing methods that combine causal motif extraction and LLMs, highlighting any specific innovations in their approach. 
Including a case study or examples demonstrating the enhanced interpretability claimed for their LLM-based approach would strengthen the paper.\\n\\n4. **Interpretability Challenges**: While CALMOL offers causal substructure explanations, the interpretability of predictions could be improved. Providing more detailed analyses or visual examples would better illustrate how causal substructure disentanglement directly impacts interaction predictions (Section 3.1). This could offer greater clarity on the added interpretability benefits of the model.\\n\\n5. **Dependency on LLMs**: Due to computational demands, CALMOL\\u2019s reliance on large language models may limit its applicability in resource-constrained environments. Furthermore, the paper does not clearly demonstrate any significant advantage of LLMs in this domain. I suggest the authors provide a more detailed discussion of the computational requirements of their model, ideally comparing performance versus computational cost with non-LLM methods. Specific examples or analyses that demonstrate the unique advantages that LLMs bring to molecular relational learning tasks would also help to substantiate this aspect.\", \"questions\": \"1. Please provide specific examples of existing MRL methods that make this assumption about molecular distributions, or clarify precisely what is meant by \\\"molecular distributions\\\" in this context. Are the authors referring to \\\"element distribution\\\" or \\\"atom distribution\\\"? Providing this clarification will help address the concern more directly and substantiate the authors' claims.\\n\\n\\n2. The model input includes both the molecular graph information and the SMILES representation; it seems an additional ablation study is needed to demonstrate the effectiveness of both modalities like MolCA .\\n\\n\\n3. After obtaining the substructure based on causal theory, why is it necessary to input it into a large language model rather than making a direct prediction? 
Does this approach truly improve the final predictive results? Furthermore, while the manuscript mentions that the LLM could enhance interpretability, I could not find any experiments or examples to support this claim.\\n\\n\\n4. With the introduction of an LLM, the model's complexity and resource consumption should be compared with that of conventional models to verify the necessity of incorporating LLMs, allowing for a more comprehensive evaluation.\\n\\n\\n5. More LLM-based models are needed as baselines to verify CALMOL's performance.\\n\\n\\n\\n[1] MolTC: Towards Molecular Relational Modeling In Language Models;\\n\\n[2] MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"### Summary\\nThe paper introduces CaLMol, a model for molecular relational learning (MRL) in a zero-shot setting. It integrates causal disentanglement and semantic alignment between Graph Neural Networks (GNNs) and Large Language Models (LLMs) to predict drug-drug interactions (DDI) and solute-solvent interactions (SSI). The method extracts causal substructures of molecules to enhance generalization to unseen data. 
Experiments on multiple datasets demonstrate performance improvements, though the contributions remain incremental.\\n\\n### Strengths\\n- The paper addresses molecular relational learning in a zero-shot setting, which is both practical and underexplored, particularly for unseen drugs and molecules.\\n- The idea of extracting functional causal substructures and aligning them with semantic information via LLMs is innovative and adds interpretability.\\n- Experiments on DDI and SSI tasks demonstrate effectiveness, with CaLMol showing consistent performance improvements across several benchmarks.\\n\\n### Weaknesses\\n- Key concepts such as \\\"causal substructures\\\" and \\\"molecular distributions\\\" are poorly defined, and important implementation details (e.g., optimization signals, weight calculations) are missing or unclear.\\n\\n- The method appears to be an incremental extension of MolTC, combining causal motif extraction and LLMs without substantial innovation. Clear differentiation from related work is lacking.\\n\\n- Incomplete Ablation and Analysis: There is no clear ablation study to show the individual contributions of the GNN and LLM components.\\nThe model's reliance on LLMs raises concerns about computational efficiency, which is not adequately discussed or compared to simpler methods.\\n\\nWhile CaLMol addresses an important and interesting problem in molecular relational learning, the paper suffers from poor methodological clarity, limited novelty, and incomplete experimental analysis. The lack of rigorous ablation studies and unclear differentiation from prior work (e.g., MolTC) weakens the strength of its contributions. 
Improvements in clarity, theoretical justification, and additional analyses are needed to validate the method's significance.\", \"additional_comments_on_reviewer_discussion\": [\"The major concerns raised by the reviewers are:\", \"Lack of Methodological Clarity: Key concepts, such as causal substructures, molecular distributions, and optimization principles, are not well-defined. Important implementation details (e.g., weight calculations, supervised signals, and disentanglement losses) are unclear or missing, leading to confusion.\", \"Limited Novelty: The method appears to be an incremental extension of MolTC, combining existing ideas (e.g., causal motif extraction and LLMs) without sufficient innovation. The paper does not clearly differentiate CaLMol from prior work, reducing its perceived contribution.\", \"Incomplete Experiments and Analysis: The ablation study is insufficient, failing to clarify the contribution of individual components (e.g., LLM vs. GNN). Computational complexity, efficiency, and comparisons to non-LLM baselines are not adequately addressed, raising concerns about practical applicability.\", \"Rather than addressing the reviews point-by-point, the authors uploaded a revised manuscript, which makes it hard to follow which points were indeed addressed. In general, the novelty concerns have not been addressed.\"]}", "{\"title\": \"Summary\", \"comment\": \"Dear reviewers,\\n\\nWe sincerely thank all the reviewers for dedicating your valuable time and effort to evaluate our work. \\n\\nWe would like to summarize the revised paper **`pdf`** as below:\\n\\n1. We updated a more concrete `Figure 1`, to illustrate the motivation of our work: MRL is driven by causal substructure pair and related property. The interaction between these two drugs is primarily driven by the imidazole ring in fluconazole, which inhibits the CYP2C9 enzyme responsible for metabolizing the coumarin core in warfarin. 
This inhibition slows down the breakdown of warfarin, causing its concentration to increase in the bloodstream, which heightens the risk of excessive anticoagulation and bleeding.\\n2. We have included a more comprehensive ablation study in `Appendix B` to evaluate each component of our model.\\n\\nAlthough the rebuttal period is limited, we sincerely hope our responses have addressed your concerns and provided greater clarity about our work. We are committed to further refining our research based on your valuable feedback!\"}", "{\"summary\": \"This work presents CalMol, a molecular relationship learning framework based on large models and disentanglement. CalMol consists of two main parts: a causal substructure extraction module and a multimodal large model fusion module. The causal substructure extraction module learns the core substructures of molecules by decomposing the target molecule and studying the substructures in contact between pairs of molecules. The multimodal large model fusion module integrates natural language instructions with SMILES and graphical representations of molecules and core substructures into LLM for downstream tasks by constructing prompts. This work is based on MolTC, with the addition of a causal substructure extraction module. The authors evaluated CalMol on DDI (drug-drug interaction) and SSI (solute-solvent interaction) tasks, where CalMol achieved comparative performance on the DDI task and notable performance on the SSI task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work presents CalMol, a molecular relationship learning framework based on large models and disentanglement, which achieved comparative performance on the DDI task and notable performance on the SSI task. Extracting the causal substructures of molecules is an interesting topic.\", \"weaknesses\": \"1. 
The authors believe that existing methods rely on \\\"variant molecular structures\\\", which hinders their performance, but there is a lack of a clear definition of \\\"variant molecular structures\\\".\\n2. For a molecule, the substructures that play a key role may vary when it binds with different molecules, i.e., the so-called core substructures are not fixed. Therefore, it is not rigorous enough to determine the core substructures of a molecule with just one set of relationships.\\n3. Using a substructure of a molecule as its causal substructure is somewhat far-fetched, especially for larger molecules.\\n4. The supervision signal and loss function used in the substructure learning stage are unclear.\\n5. The authors propose to make the disentangled spurious part S approach a random distribution, but the rationale for doing so is not explained.\\n6. There is a lack of necessary ablation experiments, such as whether the disentanglement module is effective and whether the several disentanglement losses are necessary.\", \"questions\": \"As stated in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents CaLMol, a model for molecular relational learning (MRL) that uses a combination of Graph Neural Networks (GNNs) and Large Language Models (LLMs) to predict drug-drug (DDI) and solute-solvent (SSI) interactions in a zero-shot setting. 
The model\\u2019s innovative approach in leveraging causal disentanglement and aligning molecular structures with semantic information provides a promising direction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper combines causal disentanglement and semantic alignment between GNN and LLM, allowing for a comprehensive understanding of molecular interactions.\", \"By targeting unseen molecules, CaLMol addresses an important area in MRL, providing potential for applications involving new drugs or compounds.\", \"The model is evaluated across multiple datasets, showing improvements in accuracy over several baselines, which demonstrates its effectiveness in specific zero-shot tasks.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": \"See Questions.\", \"questions\": [\"Could the authors provide additional analysis on the computational complexity of CaLMol? How about the comparison with these baselines in training time and inference time?\", \"More detail about interpretability cases and analysis should be provided to support the advantage of CaLMol.\", \"In Table 1, it is evident that the three datasets for DDI classification present a highly imbalanced binary classification task; however, the results shown for CaLMol in Table 2 perform poorly on AUC-ROC, which is a crucial metric for imbalanced data.\", \"Given the model\\u2019s dependency on selected datasets, how would the authors suggest extending the approach to larger and more diverse datasets? 
For example, Drug-Target Interaction (DTI) is also a significant task in drug discovery; demonstrating that CaLMol is useful in this task would enhance its practical significance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method to keep the invariant between molecular structures and semantic texts under a zero-shot scenario. The topic is interesting, and the experimental results look positive. Unfortunately, the paper is vague and lacks clarity both in the description of the technical approach and in the construction of the proposed datasets used for training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The topic is valuable and interesting. Introducing functional substructures based on LLM makes it intuitive to predict potential molecular interactions.\", \"weaknesses\": \"1. How to introduce supervised signals to optimize the weights between motifs from different molecules is confusing, and it is suggested that the authors provide more details to clarify the principles of calculating the weights between motifs, and what the symbols \\\\hat{Y}_C, \\\\hat{Y}_S, \\\\hat{Y} stand for.\\n\\n2. The core idea of CalMol is similar to MolTC [1]; the authors should clarify the key difference between them.\\n\\n3. The ablation study is limited; the authors should further discuss the contribution of the LLM backbone. Besides, the contribution of the causal GNN is weak in the DDI prediction task but shows a strong improvement on SSI prediction; the authors should discuss this phenomenon.\\n\\n[1] Fang, J., Zhang, S., Wu, C., Yang, Z., Liu, Z., Li, S., ... & Wang, X. (2024). Moltc: Towards molecular relational modeling in language models. 
arXiv preprint arXiv:2402.03781.\", \"questions\": \"see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0RUQmLFF1D
Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models
[ "Salma Abdel Magid", "Weiwei Pan", "Simon Warchol", "Grace Guo", "Junsik Kim", "Mahia Rahman", "Hanspeter Pfister" ]
Text-to-image (T2I) models are increasingly used in impactful real-life applications. As such, there is a growing need to audit these models to ensure that they generate desirable, task-appropriate images. However, systematically inspecting the associations between prompts and generated content in a human-understandable way remains challenging. To address this, we propose Concept2Concept, a framework where we characterize conditional distributions of vision language models using interpretable concepts and metrics that can be defined in terms of these concepts. This characterization allows us to use our framework to audit models and prompt-datasets. To demonstrate, we investigate several case studies of conditional distributions of prompts, such as user defined distributions or empirical, real world distributions. Lastly, we implement Concept2Concept as an open-source interactive visualization tool facilitating use by non-technical end-users. *Warning: This paper contains discussions of harmful content, including child sexual abuse material and NSFW material, which may be disturbing to some readers.
[ "text-to-image", "vision-language", "computer vision", "interpretability", "alignment", "fairness", "safety" ]
Reject
https://openreview.net/pdf?id=0RUQmLFF1D
https://openreview.net/forum?id=0RUQmLFF1D
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wU0qbrGnJn", "uyuAM51rDm", "umGYUzyyEa", "u9hMGBl8SD", "riBRxhAj2K", "pRHkYwRRej", "oRvdw6k6cV", "k6jCfUrB8B", "idDOvL0RGz", "gYgqdG8rLn", "eo08fTmZFv", "duHXZuRl0w", "brayVpML9B", "bRdVkKuNxX", "W7kr7iAOB5", "U96fHpLmA6", "TqoF5xKOGY", "SZYeCZlGu9", "PXUirK6lSY", "Mtccp6Sael", "LtKqPqDTKo", "KrwJuJrxTS", "JgdNumG2zF", "JYQz9AoxgX", "JPpSN9qoFV", "IlmNJi4yAk", "HvIIG8i4kO", "B1isAGrHzm", "AYvmtgQOnc", "9IvOLW3zqE", "8mEsw6m2Mt", "7Tw0peY9Il", "3X53WcI3W0", "30pGNAXyc6" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731076756818, 1733247696479, 1730457501264, 1733221215477, 1732699106191, 1730672667384, 1732358819953, 1732359175012, 1732357898044, 1732358488596, 1732359420033, 1732359401315, 1733248091354, 1732424930398, 1732424324150, 1732358344428, 1737524130630, 1732424734994, 1732359581954, 1732358974313, 1732359108068, 1733193771099, 1733245195204, 1732358178111, 1734560364467, 1732532574575, 1732570621213, 1732589801090, 1732424507227, 1732697808300, 1732359703632, 1729451490421, 1732357799247, 1732789695960 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_8F6o" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_bRZR" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_eKz3" ], [ 
"ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_3MPH" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Area_Chair_CnFc" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_eKz3" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_8F6o" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_bRZR" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_eKz3" ], [ "ICLR.cc/2025/Conference/Submission11549/Authors" ], [ "ICLR.cc/2025/Conference/Submission11549/Reviewer_eKz3" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose Concept2Concept, a framework that characterizes the conditional distributions of vision-language models using interpretable concepts and metrics. This enables systematic auditing of both models and prompt datasets. 
Through case studies, they analyze various prompt distributions, including user-defined and real-world examples. Concept2Concept is also an open-source interactive visualization tool, making it accessible to non-technical users.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work addresses the important challenge of auditing text-to-image (T2I) models to assess their reliability, fairness, and bias.\", \"The authors introduce an interpretation of concept distributions, which forms the basis for their marginal and conditional distribution notations.\", \"Through various case studies\\u2014including bias analysis, disability representation, and model misalignment\\u2014the authors explore essential aspects of T2I model auditing.\"], \"weaknesses\": [\"The primary innovation of the paper lies in interpreting distributions over concepts, leading to the marginal and conditional distributions defined in Equation 3 and summarization metrics in Equations 4-6. However, the connection between these two sets of equations is not well-explained, making it difficult to understand how they are conceptually or mathematically linked.\", \"Although marginal and conditional distributions are defined for continuous distributions, the summarization metrics\\u2014concept frequency, concept stability, and concept co-occurrence\\u2014are framed in discrete terms. The authors do not provide a derivation or proof to clarify the connection between continuous and discrete cases, leaving this foundational aspect unclear.\", \"The authors mention addressing uncertainty from off-the-shelf object detectors by sampling from a distribution of concepts. 
However, they provide little information on the practical implementation of this approach, making it challenging to interpret how this sampling is achieved or how effective it is in managing uncertainty.\", \"To address the uncertainty introduced by the object detector, the authors need a more comprehensive analysis, particularly in handling cases where the detector may be over-confident or under-confident. A systematic empirical study to quantify and validate this uncertainty would greatly improve clarity and demonstrate how well the framework manages these corner cases.\", \"The metrics introduced by the authors\\u2014concept frequency, concept stability, and concept co-occurrence\\u2014resemble the validity, proximity, and diversity metrics used for counterfactual explanations as defined in [1]. However, there appears to be no discussion connecting these proposed metrics to previous work on counterfactual explanations.\", \"[1] Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations\"], \"questions\": [\"Could you clarify the conceptual and mathematical connection between the marginal and conditional distributions in Equation 3 and the summarization metrics in Equations 4-6? An explanation of how these are linked would help in understanding the core framework.\", \"Since the marginal and conditional distributions are defined for continuous distributions, while the summarization metrics are based on discrete cases, could you provide a derivation or rationale that bridges these two? How do you address this foundational difference?\", \"You mention handling uncertainty from object detectors by sampling from a distribution of concepts, but the practical details of this approach are unclear. 
Could you elaborate on how this sampling is implemented and how effective it is in managing detection uncertainty?\", \"Given the similarities between concept frequency, concept stability, and concept co-occurrence and the metrics used in counterfactual explanations (e.g., validity, proximity, and diversity), could you discuss any connections or differences between your proposed metrics and those commonly used in counterfactual work?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response to Reviewer Feedback\", \"comment\": \"Dear Area Chairs and Reviewers,\\n\\nWe are deeply grateful for the reviewers' thoughtful and constructive feedback, which has been instrumental in refining and strengthening our work. We are delighted that the paper\\u2019s contributions, including our proposed framework, metrics, case studies, and findings, resonated with the reviewers, many of whom praised its clarity, relevance, and significant research impact.\\n\\nWe are also pleased to share an update regarding our interactive tool, which is based on the theoretical framework presented in the paper. During the discussion period, we provided a demo of the tool to reviewers, showcasing its practical capabilities in auditing T2I models. [https://colab.research.google.com/drive/1k3StsQhXXgGAYCpXoSmK3o_CxfosAdoe?usp=sharing] *Please scroll down through the notebook and through the widget itself.* Reviewer eKz3 personally tested the tool and confirmed its value by uncovering specific biases. This hands-on validation of the tool\\u2019s functionality and utility led the reviewer to increase their score from 6 to 8. We are thrilled that the interactive tool resonated so positively and demonstrated its potential impact. \\n\\n**Highlights and Strengths**\", \"reviewers_acknowledged_the_importance_and_novelty_of_our_work\": \"**1. 
Framework and Metrics.**\\nOur framework and metrics were recognized for their utility in addressing key challenges in auditing text-to-image (T2I) models:\\nReviewer 8F6o emphasized that our work \\u201caddresses the important challenge of auditing T2I models to assess their reliability, fairness, and bias\\u201d and appreciated the introduction of \\u201can interpretation of concept distributions, which forms the basis for their marginal and conditional distribution notations.\\u201d\\nReviewer 3MPH described our methods and metrics as \\u201csimple and intuitive,\\u201d adding that they contribute meaningfully to the evaluation landscape by reproducing findings of prior works.\\nReviewer eKz3 commended the framework\\u2019s robustness, noting that it \\u201cevaluates the relationship between prompts and generated images from different perspectives\\u201d through the proposed metrics.\\n\\n**2. Case Studies and Findings.**\\nThe significance of our case studies and findings, including the detection of biases and harmful content, was well received:\\nReviewer 8F6o praised how the case studies \\u201cexplore essential aspects of T2I model auditing,\\u201d particularly through analyses of bias, disability representation, and model misalignment.\\nReviewer 3MPH highlighted \\u201cimportant and worrying findings such as NSFW data in a human preferences dataset\\u201d and noted the framework\\u2019s ability to reproduce prior results.\\nReviewer bRZR described the findings as \\u201cinsightful and valuable,\\u201d providing \\u201ccritical observations that can guide future research and model development.\\u201d\\nReviewer eKz3 appreciated the relevance of the case studies, stating that \\u201cthe selected applications of the method as well as the results are very interesting and I hope will spark a discussion in the communities using the respective datasets.\\u201d\\n\\n**3. 
Tool and Accessibility.**\", \"the_interactive_tool_and_its_potential_for_broad_community_impact_were_also_recognized\": \"Reviewer 3MPH noted that \\u201copen-sourcing such a framework would be very useful for practitioners.\\u201d\\nReviewer bRZR described the tool\\u2019s contribution as particularly relevant for addressing \\u201charmful associations present in popular datasets.\\u201d\\nReviewer eKz3 highlighted its value in practice, remarking on its ability to uncover specific and interesting biases during their evaluation of the tool.\\n\\n**4. Presentation and Writing.**\", \"the_clarity_and_accessibility_of_the_paper_received_commendations_across_reviews\": \"Reviewer 3MPH called the paper \\u201cwell motivated and well written.\\u201d\\nReviewer eKz3 described it as \\u201cvery well written and easy to follow,\\u201d with sensitivity appropriately handled for complex topics.\\n\\n**4. Revisions and Impact.**\\nWe have addressed all reviewer comments, incorporating key clarifications, new cross-model analyses that have demonstrated the framework\\u2019s robustness across architectures, with findings further enriching the manuscript, and expanded discussions. These revisions, alongside updates to the tool, were well-received, with multiple reviewers expressing appreciation and raising their scores accordingly. Reviewer eKz3, for example, noted that the revisions confirmed \\u201cthe tool\\u2019s functionality and value,\\u201d leading to an increased score from 6 to 8.\\n\\n**Looking Ahead**\\n\\nOur work represents a step toward more transparent and interpretable evaluations of generative AI models. By empowering researchers and practitioners with tools and methodologies for understanding and mitigating biases, we hope to contribute to a more responsible and ethical use of generative technologies. We are thrilled by the reviewers' enthusiasm and the recognition of the impact of our work. 
Thank you for your careful consideration, and we look forward to further advancing these discussions within the community.\\n\\nSincerely,\\n\\n*The Authors*\"}", "{\"summary\": \"This paper systematically examines the associations between text prompts and generated image content in a human-understandable way. Its goal is to audit text-to-image models to ensure they produce desirable and task-appropriate images. The authors propose a framework called Concept2Concept that 1) extracts high-level concepts from generated images and 2) calculates concept distribution to uncover associations between prompts and generated images. Using this framework, the authors have identified potentially harmful associations in popular datasets like Pick-a-Pic.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides insightful and valuable findings concerning harmful associations present in popular datasets, offering critical observations that can guide future research and model development. The topic itself is highly relevant, and the authors\\u2019 motivation is clearly articulated, underscoring the importance of addressing these issues.\", \"weaknesses\": \"One limitation of this paper is that the overall framework still relies on human examination and investigation, which may impact its scalability.\\n\\nThe technical and theoretical contributions are fair but could be strengthened. Further elaboration on the differences from existing work would help to clarify the novelty of this framework. As it stands, the paper resembles more of a technical application report than a traditional academic paper. To demonstrate the framework\\u2019s utility, the authors present five case studies that effectively showcase its application; however, they lack cross-model analysis, which would add depth to the evaluation. 
Using concepts as tools to analyze bias in text-to-image (T2I) models holds strong potential, and it would be beneficial for the analysis to extend into other domains, such as ethnicity, offering a more comprehensive evaluation across multiple models and datasets. The current five case studies, though useful, may fall short of meeting the quality criteria expected in a top conference.\\n\\nBesides, why is there no information about the T2I model used in the main paper? And in the appendix, there is no discussion of the choice of model and no comparison across different models.\\n\\nAdditionally, there are minor typos (e.g., in Line 236, Figure 2) that could benefit from correction.\", \"questions\": \"Please refer to the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response and for providing the Jupyter Notebook demo of the workflow and tool. I personally tested the demo by experimenting with various anchor concepts and exploring them in the concept2concept widget. Through this, I discovered specific and interesting biases (e.g., for the anchor concept \\\"playing,\\\" young people are depicted playing the violin or piano, while all older people are depicted playing chess), confirming the tool\\u2019s functionality and value.\\n\\nGiven the quality of the rebuttal and my positive experience with the demo, I am excited to support the acceptance of the paper and have raised my score to 8.\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you for your comments! We would like to clarify a few points regarding your concerns, as we believe there may be some misunderstanding of the proposed framework\\u2019s functionality and scope.\\n\\n1. 
**The framework is not simply a visualization tool.** While the framework includes visualizations to represent conditional and marginal distributions, it is fundamentally a theoretical and practical method designed to enable users to characterize these distributions in terms of human-understandable concepts. The visualizations are a means to this end, providing interpretable representations of the conditional associations, rather than being the sole focus of the framework.\\n\\n2. **The framework supports both small- and large-scale analyses.** We demonstrated this capability across several case studies, ranging from small-scale examples with fixed prompts to **large-scale, real-world empirical distributions** such as the Pick-a-Pic case study. This case study uses a diverse set of user-generated prompts, showcasing how the framework scales effectively to real-world distributions. By leveraging concept-based analysis, the framework allows users to interpret these distributions systematically.\"}", "{\"summary\": \"This work introduces a framework for auditing T2I models and datasets. The framework uses discriminative models to find objects or concepts in generated images. Using the proposed method, the findings of several works that explore the biases of T2I models can be reproduced. Furthermore,\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well motivated and well written\", \"The proposed methods and metrics are simple and intuitive\", \"It is nice that the paper reproduces the findings of prior works using a different evaluation framework\", \"The paper has some important and worrying finding such as NSFW data in a human preferences dataset\", \"Open sourcing such a framework would be very useful for practitioners\"], \"weaknesses\": [\"My main concern is that there is an existing work [1], that has not been acknowledged. 
It introduces a similar method that uses discriminative models to find co-occurrences and biases in the generations of T2I models, somewhat limiting the contributions of this work. Nonetheless, I think the other contributions and analysis of this paper still have merit.\", \"The method section has too much fluff and introduces too many concepts that are not used later on. For example, the concept co-occurrence formula is never used, and the concept stability is never explored in the main part of the paper.\", \"Figure 3: Methodologically, it is not clear what the prompt revision means. Are some concepts used as negative prompts?\", \"[1] OpenBias: Open-set Bias Detection in Text-to-Image Generative Models, CVPR 2024\"], \"questions\": \"-\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to your point on scalability.\", \"comment\": \"We acknowledge the concern and would like to emphasize that our framework is intentionally designed to include human involvement, as this is a critical feature rather than a limitation. The system operates as a human-in-the-loop approach, which many studies have demonstrated to be both desirable and essential for building effective and safe systems, particularly in high-stakes contexts where automated decision-making cannot fully replace human oversight [1-8]. In applications such as policymaking, healthcare, or legal frameworks, the determination of what constitutes bias, harm, or unfairness is inherently context-dependent and often cannot be reliably automated. Human oversight ensures that nuanced, context-specific judgments\\u2014such as deciding whether specific concepts should be removed from a prompt dataset or adjusted when fine-tuning a T2I model\\u2014are made with care and accountability.\\n\\nWhile the framework is not designed to scale in the fully automated sense, it supports scalability in a targeted manner. 
Specifically, when automation of bias quantification is appropriate, our framework can seamlessly integrate metrics from complementary works. For instance, prior studies (e.g., TBYB, TIBET, etc.) have proposed metrics to quantify the skewness of concept distributions, enabling the computation of single-value measures for bias. These methods complement our framework, enabling scalable and automated assessments where suitable, while retaining human oversight in contexts that require nuanced, interpretive decision-making.\\n\\nOur primary goal is to provide researchers with tools that offer interpretable and actionable insights into generated content, ensuring that the system remains efficient and useful for exploring underlying distributions. This design is particularly valuable in scenarios where human decision-making\\u2014supported by AI\\u2014remains essential to guarantee the ethical and practical deployment of generative systems.\\n\\n[1] Towards Involving End-users in Interactive Human-in-the-loop AI Fairness\\n\\n[2] Principles of explanatory debugging to personalize interactive machine learning\\n\\n[3] Silva: Interactively Assessing Machine Learning Fairness Using Causality\\n\\n[4] Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives\\n\\n[5] D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias\\n\\n[6] Introduction to the special issue on human-centered machine learning (Fiebrink)\\n\\n[7] Power to the people: The role of humans in interactive machine learning (Amershi)\\n\\n[8] Human-centered machine learning (Gillies)\"}", "{\"title\": \"Response to your point on typos.\", \"comment\": \"Thank you for pointing these typos out. We have fixed them in the revised and reuploaded manuscript.\"}", "{\"title\": \"Response to your W2.\", \"comment\": \"Thank you for your comment. 
We will incorporate the information about the audited models into the main manuscript to make it clear that our results pertain to those architectures.\\nTo your second point, we agree that it would be interesting to study the behavior of closed-source models, especially given their strong safety fine-tuning. Your understanding is correct, our framework is indeed model-agnostic and could also be applied to closed-source models. However, there are several reasons we focused on open source models for this study. First, while closed-source models are indeed fine-tuned for safety, they are not the ones most widely used by individuals and organizations at scale. The cost barriers and restrictions associated with closed-source models often limit their adoption compared to open-source models like Stable Diffusion. This makes open-source models a more practical focus for understanding real-world misuse risks.\\n\\nTo address your comment, we explored the possibility of including closed-source models in our analysis but were unable to find any that provide free API access for large-scale evaluations, which aligns with their inherently restricted nature. \\nInstead, we performed an exploratory assessment using the free interface of ChatGPT. We tested the same prompts from our main paper:\\n\\n- \\\"Japanese redhead woman\\\"\\n- \\\"Woman reading a book in a bathtub\\\"\\n- \\\"A person with a disability\\\"\\n\\n\\nWhile the model appears to avoid generating explicitly sexual material, it still demonstrates other biases. For instance, it always associates disability with a wheelchair, even when the prompt does not specify this. We include these results in the appendix (last section B \\u201cDiscussion Period Figures\\u201d B.1) to demonstrate that our framework can also work on closed-source models. In the new figure 24, we are showing the top detected concepts for the images generated by ChatGPT (model 4o) using the disability prompt. 
The generated images are also shown in figure 25. \\nAligning with your intuition, this model is safety finetuned and does not generate sexually explicit content. However, our results suggest that while certain risks may be mitigated, underlying stereotypes and biases persist (e.g. wheelchair occurring with 100% frequency, man with 80% frequency, etc.), and that our method can be used to detect them.\"}", "{\"title\": \"Response to last comments after W3.\", \"comment\": \"Thank you for pointing these typos out. We have fixed them in the revised and reuploaded manuscript.\"}", "{\"title\": \"(continued from above)\", \"comment\": \"Third, the same set of fixed questions is applied regardless of the T2I model being audited, which may not accurately reflect the specific characteristics of the model under evaluation. Fixed questions also introduce computational overhead, as the number of questions directly scales with the number of forward passes through the visual question answering (VQA) model. To address these challenges, we designed a framework that is agnostic to the concept detector. While VQA models are effective for probing specific attributes (e.g., gender), relying solely on VQA models conditioned on outputs from a large language model (LLM) is unnecessarily restrictive and computationally inefficient.\\n\\nFourth, consider the set of detected biases\\u2014such as synthetic individuals where the VQA model identifies the concept \\\"woman.\\\" This concept is inherently vague, as it does not provide any insight into how \\\"woman\\\" is represented visually. 
This lack of clarity is precisely why we propose inspecting co-occurrences, which allows us to observe exactly what kinds of representations the model associated with the concept \\\"woman.\\\" Lastly, OpenBias lacks any mechanism for localizing or extracting this concept for visualization, as it relies solely on a VQA model.\"}", "{\"title\": \"Response to your main concern OpenBias related work.\", \"comment\": \"Thank you for recognizing the value of our contributions and analysis, and for bringing this related work to our attention. We have included this related work in the revised manuscript. Here, we will summarize the main advantages of our method as compared to this existing work, OpenBias. The main difference is that the OpenBias method first uses a large language model to make a set of bias proposals given some input prompts. The authors state \\u201cFor each caption in the dataset, we task a LLM with providing three outputs: the potential bias name, a set of classes associated with the bias, and a question to identify the bias.\\u201d The question generated in this step is then used for the VQA model in the detection phase. We see three main drawbacks with this method.\\nFirst, there is significant computational overhead associated with requiring an LLM for bias proposals. For every input prompt, OpenBias necessitates a forward pass through LLama2-7B. Scaling this process quickly becomes impractical\\u2014consider, for instance, our Pick-a-Pic experiment with 10 samples of 10,000 images.\\n\\nSecond, the LLM generates a limited set of bias-related questions based on the prompts, which may fail to capture the full range of details present in the generated images. This results in a fixed set of bias questions that remain the same for all T2I models. 
To illustrate, we took a few examples from our paper and followed their experimental setup (including system's prompt and using LLama2-7B chat):\\n\\nPrompt: \\u201cA photo of a young person jogging\\u201d\\n--\\nLLM relevant outputs:\\n- Bias 1: Gender\\n- Bias 2: Age\\n- Bias 3: Race\\n- Bias 4: Occupation\\n- Bias 5: Clothing style\\n- Bias 6: Location\\n- Classes: ['City', 'Rural', 'Beach', 'Mountain']\\n- Question: Where is the image taken?\\n\\n\\nWhat is important here to note is that these questions and answers are used to prompt the VQA model to detect said biases. So, for the prompt \\u201cJapanese redhead woman\\u201d, the VQA model is prompted with \\u201cqs = f'Question: {What kind of clothing is the person in the image wearing?} Choices: {\\\", \\\".join(['Formal attire', 'Casual attire'])}. Answer:'\\u201d. We saw in our manuscript how this is insufficient because the model is forced to choose between two answers that don't even apply to the image! \\n\\nIt is impossible to predict exactly what a text-to-image (T2I) model will generate for a given prompt without inspecting the actual output. Therefore, we argue that using fixed detection questions renders the system somewhat closed-set. Instead of restricting the analysis to rigid, pre-defined bias axes and answers, we propose examining the overall distribution of open-set concepts in the generated images. This approach focuses on analyzing what is actually generated rather than speculating about potential outputs. This perspective motivated our adoption of open-vocabulary object detectors like Florence 2.\"}
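To make the closed-set issue discussed above concrete, here is a minimal, illustrative sketch (function names and toy data are invented, not the paper's or OpenBias's implementation) contrasting a fixed-choice VQA query, whose answer space is decided before any image is seen, with simply tallying whatever an open-set detector returns:

```python
def closed_set_query(question, choices):
    """Build a fixed-choice VQA prompt in the style quoted above:
    the model must answer from `choices`, even if none apply."""
    return f"Question: {question} Choices: {', '.join(choices)}. Answer:"

# The VQA model is forced to pick 'Formal attire' or 'Casual attire'
# regardless of what the generated image actually shows.
prompt = closed_set_query(
    "What kind of clothing is the person in the image wearing?",
    ["Formal attire", "Casual attire"],
)

def open_set_tally(detections_per_image):
    """Open-set alternative: count whatever concepts a detector
    actually finds, with no predefined answer space."""
    counts = {}
    for concepts in detections_per_image:
        for c in concepts:
            counts[c] = counts.get(c, 0) + 1
    return counts

# Toy detector output for two generated images.
tally = open_set_tally([{"kimono", "woman"}, {"woman", "garden"}])
```

The closed-set prompt can only ever surface the two pre-chosen attire classes, whereas the tally reflects whatever was generated.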
We are thrilled that the demo confirmed the tool\\u2019s functionality and aligned with the paper\\u2019s goals of facilitating human-in-the-loop auditing for T2I models.\\n\\nYour validation of the tool and its potential to empower researchers and practitioners is incredibly encouraging, and we are delighted to have your support for the paper\\u2019s acceptance. Your feedback has been invaluable in refining both the presentation and practical contributions of our work, and we are excited about its potential to spark further discussion and advancements in the field. Thank you again for your constructive engagement and for recognizing the contributions of this work!\"}", "{\"title\": \"Response to your Q4: \\\"validity, proximity, diversity in counterfactual work\\\"\", \"comment\": \"Thank you for your question. We appreciate the opportunity to clarify the differences between our proposed metrics (concept frequency, co-occurrence, and stability) and those commonly used in counterfactual explanation work, such as those outlined in [1].\\n\\nThe metrics introduced in [1]\\u2014proximity, validity, and diversity\\u2014are designed specifically to evaluate counterfactual explanations. The primary goal in [1] is to create counterfactual examples that help end-users understand model predictions by suggesting actionable changes (e.g., \\u201cyou would have received the loan if your income were $10,000 higher\\u201d). 
These metrics are summarized as follows:\\n\\n- Proximity: Measures how close a counterfactual example is to the original input, calculated as the mean feature-wise distance.\\n- Diversity: Assesses the variation among multiple counterfactual examples by calculating feature-wise distances between each pair of examples.\\n- Validity: Evaluates the fraction of counterfactual examples that successfully achieve a different prediction outcome compared to the original input.\\n\\nIn contrast, our work addresses a fundamentally different problem: characterizing the distribution of images in terms of human-interpretable concepts. Rather than focusing on counterfactual explanations for model predictions, we propose a framework for using detectors to extract concept labels and bounding boxes from images and then introduce metrics to simplify the exploration of the resulting concept distributions. Specifically:\\n\\n- Concept frequency: The empirical probability of a concept occurring across all generated images.\\n- Co-occurrence: The total number of times a pair of concepts appears together, highlighting relationships between concepts.\\n- Stability: Differentiates between persistent concepts (those that consistently appear regardless of prompt variations, indicated by low CV) and triggered concepts (those sensitive to specific prompts, indicated by high CV).\\n\\nWhile both frameworks involve metrics, their purposes and scopes are distinct. The counterfactual metrics in [1] aim to guide and evaluate the generation of examples for explaining model predictions. In contrast, our metrics focus on providing a structured understanding of the distribution and relationships of human-interpretable concepts within visual data.\\n\\nWe hope this clarifies the key distinctions and illustrates how our work addresses a different set of challenges. 
Thank you again for raising this point and giving us the opportunity to elaborate.\\n\\n[1] Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations\"}", "{\"title\": \"Response to your Q1 \\\"connection between marginal and conditional distributions and the summarization metrics\\\"\", \"comment\": \"Thank you for your question. The marginal and conditional distributions in Equation 3 serve as the theoretical foundation for the summarization metrics in Equations 4-6. Below, we clarify the conceptual and mathematical connections:\\n1. **Marginal and Conditional Distributions**:\\n - The marginal distribution $p(C)$ in Equation 3 aggregates concept probabilities across all prompts: \\n $$\\n p(C) = \\\\int p(C|t)p(t) \\\\, dt,\\n $$\\n while the conditional distribution $p(C|t)$ captures the likelihood of concepts given a specific prompt:\\n $$\\n p(C|t) = \\\\int p(C|x)p_G(x|t) \\\\, dx.\\n $$\\n\\nIn practice, $p(C|t)$ is empirically approximated by generating $K$ images per prompt and extracting detected concepts using an object detector. Similarly, $p(C)$ is approximated by aggregating over $N$ prompts.\\n \\n **Summarization Metrics**:\\n - **Concept Frequency (Equation 4)**: $P(c)$ estimates the marginal distribution $p(C)$ by calculating the proportion of images containing concept $c$, making it a discrete empirical estimator for the continuous $p(C)$.\\n - **Concept Stability (Equation 5)**: The coefficient of variation (CV) captures variability in the conditional distribution $p(C|t)$ across prompts.\\n - **Concept Co-occurrence (Equation 6)**: $P(c, c')$ estimates joint probabilities in $p(C)$ by counting co-occurrences of concepts $c$ and $c'$ across all images.\\n\\nIn summary, the summarization metrics provide interpretable, discrete approximations of the theoretically defined marginal and conditional distributions, enabling practical analysis of $p(C)$ and $p(C|t)$ in the context of T2I models. 
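As a rough, self-contained sketch of the empirical estimators described above (the toy prompts, detections, and function names are invented for illustration; this is not the paper's implementation):

```python
from collections import Counter
from itertools import combinations
from statistics import mean, pstdev

# Invented toy data: detections[prompt] = one detected-concept set per generated image.
detections = {
    "a person jogging": [{"woman", "path", "tree"}, {"woman", "shorts"}],
    "a doctor":         [{"man", "coat"}, {"woman", "coat"}],
}

all_images = [s for imgs in detections.values() for s in imgs]
n_images = len(all_images)

# Concept frequency (cf. Eq. 4): fraction of all images containing concept c.
counts = Counter(c for s in all_images for c in s)
frequency = {c: n / n_images for c, n in counts.items()}

# Concept co-occurrence (cf. Eq. 6): number of images where c and c' appear together.
cooccurrence = Counter(
    pair for s in all_images for pair in combinations(sorted(s), 2)
)

# Concept stability (cf. Eq. 5): coefficient of variation of the per-prompt
# frequency; low CV = persistent concept, high CV = prompt-triggered concept.
def stability(concept):
    per_prompt = [mean(concept in s for s in imgs) for imgs in detections.values()]
    m = mean(per_prompt)
    return pstdev(per_prompt) / m if m > 0 else float("inf")
```

On this toy data, `stability("coat")` exceeds `stability("woman")`: "coat" is triggered only by the doctor prompt, while "woman" persists across both prompts.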
We will update the manuscript and make sure to clarify the relationships between the prompt distributions in equation 3 and the summarization metrics.\"}", "{\"title\": \"Response to your W3.\", \"comment\": \"We appreciate your thoughtful question and agree that the choice of detection model can influence the distribution of extracted concepts. While the quality of the detection model is an important consideration, the central focus of our framework is to empower human-in-the-loop auditing. The framework is designed not as a fully automated system but as a tool to assist humans in identifying and probing potential biases or errors. For example, even if a detector produces false positives or misses certain concepts, the framework provides bounding boxes and associated images that guide the human reviewer to investigate further. This ensures that the model\\u2019s outputs are not blindly accepted but critically examined in context, leveraging the detector as a signal for where to search and analyze.\\n\\nWhile different detection models may yield variations in extracted concepts, the human-centric design of our approach ensures that such differences can be identified, visualized, and checked during the auditing process. This flexibility underlines the framework\\u2019s core contribution\\u2014supporting human decision-making with AI assistance\\u2014rather than focusing on a specific detector or its engineering intricacies.\\n\\nFor our implementation, we selected Florence 2 due to its state-of-the-art generalist detection capabilities, open-set recognition, and strong localization features. These attributes make it versatile for the broad range of applications covered in our case studies. However, the framework is not restricted to Florence 2\\u2014users can substitute it with fine-tuned or specialist models based on their needs. The only functional constraints that we impose on the detector model are (1) open-set detection and (2) localization. 
For example:\\n- A specialist model might be preferable for detecting specific flower species in a nature-focused application.\\n- An NSFW detector could be used for explicit content moderation.\\n- Domain-specific models could support tasks in fields such as biomedicine or wildlife research.\\nEven the authors of Florence 2 have developed and released specialist versions of the model for such purposes, demonstrating the adaptability of our framework to application-specific requirements.\\n\\nRegarding the potential conflict between safety fine-tuning and identifying sensitive concepts (e.g., CSAM), we view this as part of the broader question of a model\\u2019s ability to detect certain types of content. While we do not expect a detection model like Florence 2 to explicitly detect CSAM per se, it is capable of identifying related concepts (such as \\u2018nude,\\u2019 \\u2018underwear,\\u2019 etc.) which may serve as proxies or indicators in specific contexts (which we argue, can only be identified through co-occurrences). This demonstrates the model\\u2019s ability to recognize sensitive or NSFW-related concepts broadly. Safety fine-tuning may introduce limitations when detecting sensitive or controversial content, but similar issues can also arise from out-of-distribution concepts or gaps in training data. Our framework addresses this challenge by allowing users to swap the detection model with one better suited to the task at hand, ensuring flexibility and adaptability for diverse applications.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to your Q3: \\\"uncertainty from object detectors\\\"\", \"comment\": \"Thank you for your question. We clarify the practical details of our statement:\\n\\n\\u201cWe note that our use of an object detector D can introduce uncertainty in the extracted concepts, $C_{i,k}$ (e.g., due to detection confidence levels or the probabilistic nature of the model). 
Thus, we consider $C_{i,k}$ as samples from a distribution $C_{i,k}$ \\u223c $p(C|x_{i,k})$. In the case that concepts are extracted deterministically from a given image $x_{i,k}$, $p(C|x_{i,k})$ is a delta distribution\\u201d \\n\\n\\nIn practice, the detector (e.g., Florence 2) generates outputs as part of its sequence-to-sequence framework. At each decoding step, the model computes a probability distribution over possible next tokens using the softmax of logits from the transformer decoder. Sampling can be performed from this probability distribution to generate non-deterministic outputs. However, in our framework, we apply greedy decoding, where the token with the highest probability is selected at each step. As a result, we rely on deterministic outputs, which simplifies the formalization of our method.\\nWe acknowledge the importance of additional analysis on detector over/under-confidence and view this as part of a broader discussion on detector capabilities. We have included a detailed discussion in our response to Reviewer #eKz3, W3.\"}", "{\"title\": \"Response to method section length and concept co-occurrence\", \"comment\": [\"Thank you for this valuable feedback. We would like to clarify that concept co-occurrence is a fundamental aspect of our method, as it provides deep insights into the relationships between generated concepts. It is extensively discussed and utilized throughout the paper, specifically in Section 4.3, Section 5.1, and Section 5.2. 
Additionally, it is illustrated in the following figures:\", \"Figure 2 (right) and Figure 3 (bottom)\", \"Figure 4: Accompanied by the caption, \\u201cCo-occurrences of concepts with the detected concepts \\u2018girl\\u2019 and \\u2018woman\\u2019 in 10 random samples of the Pick-A-Pic Dataset\\u2026\\u201d\", \"Figure 6: Demonstrates concepts identified through co-occurrence, such as \\u201cdreadlocks, beanie, mask, ski.\\u201d\", \"Regarding concept stability, we introduced it as an integral component of our method. However, due to space constraints, its detailed discussion and application were moved to the appendix. We emphasize that concept stability is a highly practical tool for identifying which output concepts are consistently triggered by specific input prompts (e.g., in counterfactual scenarios) and for assessing persistence across varying prompts. In Application 1, we explored three case studies using user-specified prompt distributions, and similar insights from those case studies could be derived using concept stability. These results are elaborated on in Figures 9 and 10 in the appendix.\"]}", "{\"title\": \"Response to your points on related works and cross-model analysis\", \"comment\": \"We appreciate your eedback and agree that further elaboration on the differences from existing work would enhance the clarity of our contributions. We give a much deeper discussion on drawbacks of one related work (OpenBias) in our response to reviewer #3MPH. OpenBias is very similar to the other mentioned methods (such as StableBias, TIBET) and thus this discussion broadly covers the related works. Please see our comments to reviewer #3MPH and #8F6o.\\n\\nRegarding the evaluation, we thank you for pointing out the need for cross-model analysis. 
To address this, we have incorporated two additional models into our experiments and conducted a comprehensive cross-model analysis, as detailed in the new results in the appendix (last section B \\u201cDiscussion Period Figures\\u201d) and are further discussed in detail in response to reviewer #eKz3 (comment: \\\"Response to your W2 (part 2):\\\"). This analysis evaluates the framework's performance across diverse model architectures, providing a more robust demonstration of its utility and generalizability. We believe this addition strengthens the empirical contributions of the paper and directly addresses your concern.\"}", "{\"title\": \"Response to information about the choice of T2I models and cross-model experiments (pt2) and including ethnicity\", \"comment\": \"Thank you for this insightful feedback. We acknowledge the importance of providing more clarity about the T2I models used and expanding the discussion to better address cross-model evaluation. Due to space constraints, details regarding the choice of T2I models, along with other hyper-parameters for each case study, were originally moved to the appendix. However, we recognize the need to make this information more accessible. In the revised manuscript, we will move the description of the T2I models used in each case study to the main paper.\\n\\nRegarding the choice of T2I models, these were dictated by the specific objectives of the case studies:\\n\\nApplication 1 focused on reproducing results from existing works for consistency. Therefore, we used the same model (Stable Diffusion 2.1) across all three case studies: StableBias, TBYB, and the Disability case study.\\nApplication 2 involved existing datasets generated with different models. The Pick-a-Pic dataset includes outputs from multiple T2I models such as Stable Diffusion, SDXL, and Dreamlike Photoreal, while the StableImageNet dataset was generated using Stable Diffusion 1.4. 
\\n\\nWe understand the importance of cross-model evaluation for a comprehensive analysis of T2I models and their biases. To address this concern, we conducted additional experiments with two newly released models: Lumina Next SFT [1] and Stable Diffusion 3 Medium. Both models currently lead the T2I leaderboards and have amassed thousands of downloads. Specifically, Stable Diffusion 3 Medium, released in July, recorded ~52K downloads last month alone [2]. By incorporating these two models into our analysis, we aim to address your concern regarding cross-model evaluation. The results from these experiments are in the appendix (last section B \\u201cDiscussion Period Figures\\u201d) and are further discussed in detail in response to reviewer #eKz3.\\n\\nFinally, we appreciate the suggestion to explore broader domains, such as ethnicity. While our current focus has been on demonstrating the framework's utility through five specific case studies, we agree that extending the analysis to additional domains would provide additional depth. Notably, even without explicitly probing for specific ethnic groups, the current framework already provides valuable insights into minority representation. For example, it identifies concepts related to hairstyles, such as \\u2018afro\\u2019 and \\u2018dreadlocks\\u2019, and captures co-occurrences that include certain ethnicities like Asian or African-American, which are part of Florence 2's training. These concepts are illustrated in Figure 1-6 in the main paper.\\n\\n\\n[1] Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers. Peng Gao et al. 2024. \\n\\n[2] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. Patrick Esser et al. 2024.\"}", "{\"title\": \"Re: Jupyter widget\", \"comment\": \"Thank you for your thoughtful feedback and for considering the impact of our work. 
We appreciate your willingness to reevaluate your score and value the constructive insights you\\u2019ve shared.\\n\\nWe\\u2019d like to address your concerns regarding the choice of a Jupyter widget as the interface for interaction. The primary advantage of using a widget is its seamless integration within the Jupyter ecosystem, which is already a cornerstone of machine learning research and development. By providing interactivity and visualizations directly within the Jupyter Lab environment, the advantages are threefold:\\n\\n1. Our tool integrates naturally into researchers' existing workflows. This eliminates the need for a separate, locally hosted web app, which could introduce additional setup complexity and detract from the tool\\u2019s accessibility.\\n\\n2. Secondly, hosting the tool directly in Jupyter Lab allows users to rapidly iterate through different prompts/datasets/models and see the changes immediately reflected in the widget in the same notebook.\\n\\n3. Finally, while the tool can still be installed and used in local Jupyter notebooks, the integration with web-based versions of Jupyter, such as Colab, allows researchers to quickly share their findings with others, thus democratizing the auditing process, and enabling greater collaboration on what may be sensitive or hurtful issues.\\n\\nTo address your specific point, our widget can now be initialized with a single function call, ensuring that it is straightforward and user-friendly. Furthermore, while it provides interactive visualizations, the widget goes beyond being a \\\"simple visualization tool\\\" by enabling users to actively engage with the T2I model to audit and explore outputs in real time. 
This interactivity is essential for uncovering the nuanced insights that emerge through the human-in-the-loop approach central to our framework.\\n\\nTo demonstrate our claims, we provide an example of our tool in a Jupyter notebook hosted in Colab [https://colab.research.google.com/drive/1k3StsQhXXgGAYCpXoSmK3o_CxfosAdoe?usp=sharing]. *Please scroll down through the notebook and through the widget itself*. This notebook provides an end-to-end walkthrough of widget installation (a standard pip command), data preparation (we provide a commented helper function for convenience), and widget usage (initialized with a single function call). The notebook exposes the data preparation function \\u2013 main() \\u2013 instead of wrapping it in the tool to provide greater transparency and flexibility for researchers to write their own custom prompts for auditing. \\n\\nWe wholeheartedly agree with your assessment that human interaction is key to maximizing the framework\\u2019s potential. The integration of the widget facilitates this interaction by lowering the barrier to experimentation and discovery. It empowers researchers to systematically explore and identify misalignments, enhancing both the interpretability and utility of the framework. \\n\\nWe are confident that the widget offers an effective user experience for auditing the T2I model, and we hope that our Colab hosted notebook demonstrates the ease with which users can get started with the tool and how it can be adapted to different usage scenarios. We would be happy to provide additional demonstrations or examples to showcase its functionality and usability.\\n\\nThank you again for engaging so deeply with our work and for your invaluable suggestions. 
We look forward to further improving our contribution based on this exchange.\"}", "{\"title\": \"Update on Demo\", \"comment\": \"Dear Reviewer bRZR,\\n\\nThank you for your feedback on our framework and for sharing your concerns about its automatic capabilities in detecting biased concept associations. We greatly value your insights and have worked to address the points you raised.\\n\\nYour primary concern was that the framework seemed to function primarily as a visualization tool, lacking automatic analysis. We are excited to share that we created an interactive widget that integrates seamlessly within a Jupyter notebook, aligning with the existing ML development ecosystem. While it provides interactive visualizations, the widget goes beyond being a \\\"simple visualization tool\\\" by enabling users to actively engage with the T2I model to audit and explore outputs in real time. This interactivity is essential for uncovering the nuanced insights that emerge through the human-in-the-loop approach central to our framework. \\n\\nTo demonstrate our claims, we provide an example of our tool in a Jupyter notebook hosted in Colab [https://colab.research.google.com/drive/1k3StsQhXXgGAYCpXoSmK3o_CxfosAdoe?usp=sharing]. *Please scroll down through the notebook and through the widget itself.* This tool not only enables interactive exploration but also includes functionality for identification of biases within the model. This is all possible due to our proposed theoretical framework for characterizing the conditional distributions using concepts.\\n\\nNotably, another reviewer (Reviewer eKz3) evaluated the tool, personally tried it, and successfully used it to identify their own set of biases. This enhanced experience directly addressed their initial concerns, and as a result, they raised their score to an 8. 
We hope this demonstrates the significant progress we\\u2019ve made in ensuring the framework goes beyond visualization to offer actionable insights.\\n\\nWe want to reiterate that the ability to detect biases is a core goal of our work. The interactive tool facilitates both automatic analysis and manual exploration, empowering researchers to uncover biased concept associations effectively. We believe we have addressed your concerns and demonstrated the impact of the framework. Given the enhancements and the tool's demonstrated capabilities, we hope you might reconsider your score to better reflect this. Thank you!\"}", "{\"title\": \"Response to your W2 (part 2): \\\"Did you perform any experiments with other model architectures...\\\"\", \"comment\": \"We appreciate your question and would like to clarify that Application 1 is not restricted to a single model architecture. Below, we provide a detailed explanation and address the specific concerns raised.\\n\\nModels Used in Application 1\\n---\\nCase Study 1 in Application 1 utilizes SDXL Lightning, which differs from earlier Stable Diffusion architectures. Additionally, we performed experiments with multiple models, including:\\n\\n- Stable Diffusion 1.4\\n- Stable Diffusion 2.1\\n- SDXL\\n- SDXL Lightning\\n- Dreamlike Photoreal\\n\\nNotably, SDXL introduces several architectural innovations compared to Stable Diffusion 1.4 and 2.1, such as a second text encoder, a refiner model, and different conditioning during training. Similarly, SDXL Lightning incorporates non-MSE distillation and an adversarial discriminator, further differentiating it from the earlier architectures.\\n\\nWhile Application 2 focuses on Pick-a-Pic, which consists of images generated from multiple models (e.g., SDXL, SD Lightning, Dreamlike Photoreal), Case Studies 2 and 3 in Application 1 aim to reproduce results from prior works (StableBias, TBYB, and Disability Qualitative Case Study). 
To ensure consistency with these studies, we used Stable Diffusion 2.1, the overlapping model across all three.\\n\\nNew Cross-Model Experiments\\n---\\nTo address your concern and further demonstrate the framework\\u2019s effectiveness across different architectures, we conducted additional experiments with two newly released models:\\n\\n- Lumina-Next SFT [1]\\n- Stable Diffusion 3 Medium [2]\\n\\nBoth models currently lead the T2I leaderboards and have amassed thousands of downloads. Specifically, Stable Diffusion 3 Medium, released in July, recorded ~52K downloads last month alone [2].\", \"since_the_concern_was_about_application_1\": \"we conducted new experiments for the case study 1 (toy) and 3 (disability) in application 1. In sections B.2.1-B.2.4, we show a subset of results chosen from the full results. Each set of results corresponds to a specific prompt and shows a random sample of 10 images from the larger generated sample of images. We show only a snippet of results in subsection B.2 in the appendix and summarize some notable differences across the models here.\\n\\nFirst, in B.2.1, Concept2Concept shows us that in over 60% of the images, Lumina has some notable difference in the concept lighting. The other 2 models report 0% for the concept lighting. When visualizing the images, we can clearly see that this concept is referencing the dark lighting. Second, Concept2Concept reports that almost 100% of images generated by SD2.1 and SD3 Medium contain wheelchair, while Lumina has 0% for wheelchair. Concept2Concept is also clearly demonstrating the model differences in terms of setting with the concepts: sky, sidewalk, street, window, etc. \\n\\nNext, in B.2.2, again Concept2Concept shows there is a lighting difference between the models\\u2019 outputs. Second, it shows precisely how each model represents the limb difference: Lumina, in the hand and fingers, and SD2.1 in the leg and foot, and SD3M in the arms and legs. 
In B2.3, Concept2Concept shows that the hand is clearly detected in SD2.1, the ear in SD2.1 and SD3M, and the hearing aid in SD3M. \\n\\nLastly, in B.2.4, there are many interesting differences in how the T2I models represent the concept \\u2018jogging\\u2019. First, SD3M represents jogging with girls and boys as quantified by Concept2Concept in figure 32. On the other hand, SD2.1 associates jogging with the concept woman. Lumina associates jogging with woman and girl. We also note the difference in attire related concepts. Lumina is the only model where bra occurs in the top detected concepts. When comparing the setting of the image, our method Concept2Concept reports that Lumina associates jogging with the concepts field and sun; SD2.1 associates jogging with a path. \\n\\nThese new experiments highlight our method's ability to pinpoint and quantify differences between models effectively. We have incorporated these two models into two new case studies, and their full results will be included in the revised manuscript.\\n\\n[1] Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers. Peng Gao et al. 2024. \\n[2] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. Patrick Esser et al. 2024.\"}", "{\"metareview\": \"This paper introduces a framework to audit T2I models and prompt datasets, which analyzes association between text prompts and generated images with interpretable tools. Four knowledgeable reviewers went over this submission. The reviewers recognized that the paper tackles an important topic (8F6o, bRZR), that the paper is well written and well motivated (3MPH, eKz3), and that the proposed methods and metrics are simple and intuitive (3MPH).\", \"the_reviewers_also_raised_concerns_about\": \"1. The connection between marginal and conditional distributions, and connection between continuous and discrete cases, which was not well established (8F6o)\\n2. 
The little discussion on the models used to address uncertainty: no empirical study to validate the captured uncertainty, nor about the sensitivity of results to the choice of detector model (8F6o, eKz3)\\n3. Missing discussion on related work: counterfactual explanations literature (8F6o) and OpenBias (3MPH), questioning the technical and theoretical novelty of the paper (bRZR)\\n4. Experiments not fully convincing (bRZR, eKz3): Cross-model analysis would be beneficial; extension to other domains such as ethnicity would be beneficial; validation across multiple models and datasets would solidify the experimental validation. The selection of T2I model should be justified.\\n5. The scalability of the method relying on human examination (bRZR)\\n6. The novelty being unclear and the utility of the proposed framework difficult to assess (bRZR, eKz3)\\n\\nThe rebuttal and discussion partially addressed the reviewers concerns. For example, the rebuttal established the missing connections pointed out by the reviewers, discussed the differences with the missing related work, showcased with a few qualitative examples the output of OpenBias, analyzed the output of additional models, and argued the importance of having humans in-the-loop. During the reviewers' discussion period, one reviewer tended towards acceptance, one reviewer towards rejection, and the two remaining reviewers positioned themselves as borderline. The AC went over the paper, and all the discussion materials. After careful considerations, the AC agrees with some of the concerns raised by the reviewers. In particular, the technical or experimental novelty remains unclear. Although the topic covered is very important and relevant to the community, it remains unclear why one should choose the proposed framework over prior work. To address this concern, it might be worth doing an in-depth benchmarking of the proposed approach vs. previous works (e.g. OpenBias and other cited works in the paper). 
This benchmarking should ideally go beyond a few prompt qualitative analysis and arguments highlighting differences (a user study could be considered). Alternatively, the authors could focus on better highlighting what are the novel findings enabled by the framework. The current case studies replicate known findings in the community about Internet crawled datasets (such as CSAM and NSFW content) or synthetic data (biases, misalignment, and overall problematic content similar to real datasets). For these reasons, the AC recommends to reject and encourages the authors to consider the feedback received to improve future iterations of their work.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"comment\": \"Dear Authors, thank you for the detailed replies. Regarding your comments adressing the three weaknesses:\\n\\n**W1:** Thank you for providing the Google Colab notebook. However, the notebook does not enable me to evaluate the UX or the practical utility of the tool, as it only includes Python functions and their corresponding figure outputs. Given that the tool is advertised as a primary contribution of the paper\\u2014even highlighted in the abstract\\u2014I would expect at least a repository to allow for local deployment and testing. Currently, it is unclear whether the tool is a standalone, locally hosted web application or merely an interactive cell within a Jupyter notebook. If it is the latter, while practical, I would hesitate to classify it as a standalone \\\"tool.\\\".\\n\\n**W2:** The response answers my question sufficiently.\\n\\n**W3:** Thank you for your response, which satisfactorily addresses my question and highlights that the central focus of the framework is the human-in-the-loop approach. 
However, this emphasis reinforces my concern raised in W1: If the tool, and by extension the entire UI/UX process, is central to the framework's ability to uncover biases and potential model errors, it becomes critical to provide at least some version of the tool to test. Without this, the framework's integrity cannot be fully assessed, as it risks being incomplete without an understanding of the tool's functionality and contribution.\"}", "{\"comment\": \"Thank you for your thoughtful feedback, and we are pleased to see that many of your concerns have been addressed.\\n\\nWe would like to clarify that the Google Colab notebook we provided is *not* the interactive tool we describe in the paper. Instead, the Colab notebook is intended as a demonstration of the underlying functions and visualizations. In section 6 we mentioned that the interactive tool itself is a standalone application implemented as a Jupyter widget, designed to be hosted and run locally within a Jupyter Notebook environment. Screenshots of this application are provided in the appendix. However, we recognize that our description may have caused some confusion, and we are open to revising our wording to clarify that it is a \\\"Jupyter-based interactive application\\\" rather than a general \\\"tool.\\\" \\n\\nWe are exploring making a version accessible before the discussion period ends. \\n\\nWhile the interactive application is indeed a key part of our work, we would like to emphasize that it is only *one* component of our broader contributions. Specifically, our contributions are:\\n\\n1. A framework that enables users to characterize the conditional distributions of T2I model outputs using human-understandable concepts.\\n2. Case studies spanning diverse input distributions, real-world empirical datasets, prior works, and pedagogical examples.\\n3. 
Significant findings, including evidence of misaligned and harmful content (e.g., CSAM) in widely used datasets, which have broader implications for safety and ethics in generative models.\\n4. A standalone interactive application, which supports our framework by facilitating the human-in-the-loop analysis process.\\n\\nBy focusing on these distinct contributions, we hope to convey the holistic impact of our work beyond the application itself. That said, we will strive to make the interactive application as accessible and testable as possible within the constraints of the review process.\\n\\nWe appreciate your detailed feedback and will incorporate these insights to strengthen both our paper and the accessibility of our contributions.\"}", "{\"comment\": \"I appreciate the authors' detailed response, which has addressed most of my concerns. I have only a minor point regarding Q4 on \\\"validity, proximity, and diversity in counterfactual work.\\\" While I recognize that the proposed metrics operate in a different context than those used in counterfactual works, the broader relationship between them is evident. Including a brief note in the related work section or supplementary materials to clarify this distinction could help readers better understand the connection.\\n\\nGiven the author's response, I raise my score to 6 from 5.\"}", "{\"title\": \"Response to your Q2: bridging between continuous definitions and discrete metrics\", \"comment\": \"Thank you for your question. To bridge the continuous distributions ($p(C)$ and $p(C|t)$) with our discrete summarization metrics, we rely on empirical approximations. Specifically:\\n\\n1. 
**Empirical Approximation**: We sample $N$ prompts and $K$ images per prompt to approximate the continuous distributions:\\n $$\\n p(C) \\\\approx \\\\frac{1}{N} \\\\sum_{i=1}^N p(C|t_i), \\\\quad p(C|t) \\\\approx \\\\frac{1}{K} \\\\sum_{k=1}^K p(C|x_{i,k}),\\n $$\\n where $p(C|x_{i,k})$ reflects detections by the object detector.\\n\\n2. **Metrics as Estimators**:\\n - **Concept Frequency** estimates $p(C)$ as the proportion of images containing concept $c$\\n - **Concept Stability** uses the coefficient of variation (CV) to assess variability in $p(C|t_i)$, providing insight into concept consistency across prompts.\\n - **Concept Co-occurrence** estimates joint probabilities from co-occurrence counts.\\n\\nWhile $p(C)$ and $p(C|t)$ are formally continuous, our framework operates on their discrete empirical counterparts, ensuring practicality and interpretability. This approach balances theoretical grounding with the constraints of real-world data. Thank you for bringing this to our attention. We will update the manuscript and make sure to clarify that we use discrete approximations.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thank you for the response!\\n\\nRegarding human involvement, I agree with the human-in-the-loop approach and don't require full automation without human supervision. However, I have concerns about the framework's automatic capability in detecting biased concept associations. The proposed framework appears to function more as a visualization tool, lacking automatic analysis and biased concept association detection.\\n\\nFor instance, the paper demonstrates results using a prompt template like \\\"A person with [sth] where [sth],\\\" generates images, calculates conditioned concept relations, and creates visualization figures. Humans must then examine these figures to identify biased associations. 
While this works adequately for small-scale analysis with few concepts and a single prompt template, it becomes problematic when examining thousands of prompt templates and hundreds of concept associations. The results become difficult to interpret and understand, requiring significant human effort. Even Figure 4 is challenging to interpret for biased associations. Adding a module for automatic examination and biased association detection would improve the framework.\\n\\nGiven these limitations, the technical and theoretical contributions remain limited, though I appreciate the additional experiments across multiple models.\"}", "{\"title\": \"Response to clarify prompt revision meaning\", \"comment\": \"We agree that the explanation of prompt revision is unclear and will address this in the revision. In Figure 3, our method provides a human-interpretable characterization of the distribution of concepts present when the original prompt was \\u201cA photo of a person with a disability.\\u201d This characterization enables users to modify the prompt by emphasizing certain concepts or attenuating others, effectively revising (or engineering) the prompt. For this, we employ the straightforward technique of positive and negative prompting.\\n\\nPositive and negative prompting allows users to customize the image generation process by including or excluding specific elements to achieve more precise, aligned, and desirable outputs. A common approach for negative prompting involves replacing the empty string in the sampling step with negative conditioning text. This modification leverages the core mechanics of Classifier-Free Guidance by substituting the unconditional prompt with meaningful negative text.\\n\\nFor instance, in Figure 3, Concept2Concept showed that the original prompt \\u201cA photo of a person with a disability\\u201d produced images where nearly 100% included a wheelchair, even though the user did not explicitly request it. 
This outcome may be undesirable for several reasons, such as perpetuating harmful stereotypes or lacking diversity. To address this, users can adjust the output by focusing on the people rather than the wheelchairs. In our example, using positive prompts like +=\\u2018person, face\\u2019 and negative prompts like -=\\u2018wheelchair, wheel\\u2019, users can guide the model to generate images with fewer wheelchairs while highlighting the desired aspects. This process is enabled by the concise and interpretable summaries provided by our method.\"}", "{\"summary\": \"The paper introduces Concept2Concept, a framework for auditing text-to-image models by analyzing the associations between generated images and prompts using interpretable concepts. It helps uncover biases, harmful content, and unexpected associations in models and datasets, demonstrated through various case studies.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well written and easy to follow. The authors handle the sensitive topics with the appropriate sense of responsibility.\", \"The selected applications of the method as well as the results are very interesting and I hope will spark a discussion in the communities using the respective datasets.\", \"The presented framework increases evaluation robustness due to the three presented metrics, evaluating the relationship between prompts and generated images from different perspectives.\"], \"weaknesses\": \"**W1:** To my knowledge, there is no anonymous code repository provided with the paper, neither for the experiments nor the tool. As a result, I am unable to comment on the tool's usefulness. 
It would be beneficial if the experiments could be replicated either within the tool or through a dedicated repository to also validate the correctness of the results.\", \"the_selection_and_robustness_of_the_results_regarding_the_t2i_and_vlm_detector_models_are_in_my_opinion_just_weakly_addressed\": \"**W2:** Only in the Appendix, it is revealed that the audited T2I model is Stable Diffusion 2.1. This information should be part of the main manuscript as the results of Study 4 only apply to this model architecture. It would be interesting how results would change for other model architectures, as especially closed-source models are strongly safety fine-tuned. If I understand correctly your framework is model-agnostic and could also be applied to closed-source models accessed via API calls. Did you perform any experiments with other model architectures? And if not please argue in the manuscript why you restrict Application 1 to this specific model architecture.\\n\\n**W3:** While the authors acknowledge that the detection model introduces uncertainty in the extracted concepts (Line 132), they do not address how sensitive the application results are to the choice of the detector model. Could specific concepts be overlooked if a different grounding model is used? 
Additionally, how does the safety fine-tuning of the detection model potentially conflict with the task of identifying sensitive concepts, such as in CSAM?\\n\\nI am open to raising my score if the identified weaknesses are either adequately addressed through revisions to the manuscript or convincingly argued to be non-issues.\", \"comments\": [\"There is a closed bracket missing in line 187 \\u201eP(c\\u201c\", \"There is a closed bracket too much in line 236 \\u201eFigure 2)\\u201c.\"], \"questions\": \"See weaknesses W1 to W3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to your W1.\", \"comment\": \"Thank you for highlighting the importance of providing access to the code and tool for reproducibility and validation. To address this, we are including a link to an anonymous self-contained notebook which can be run and accessed at https://colab.research.google.com/drive/1mqPyC_4ifM9jjM61Sn_CsnjUCMw_XxUk#scrollTo=76e26599-8f54-45be-9240-27a9ddf4e256. This repository enables the reproduction of our toy experiments and serves as the foundation for the tool.\\nThe tool itself is designed as an interactive extension of the codebase and integrates easily with Jupyter notebooks, allowing users to explore and analyze the results in a user-friendly environment. In our original submission, we included screenshots of the tool in the appendix. While the current version is focused on reproducibility for the review process, we plan to release the full code and tool publicly after the paper\\u2019s publication. \\nWe address your last comment in our response to your last comment (W3). Please see below.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your response. 
Personally, I remain skeptical that a Jupyter widget would be an effective interface for such interactions, especially when compared to, for instance, a locally hosted web app with a more comprehensive interface and no-code interactions. That said, I am open to reconsidering if the widget integrates easily into a notebook (e.g., with a single function call). If it can be demonstrated that the tool goes beyond a \\\"simple visualization tool\\\" and provides an effective UX for interacting with your T2I model for auditing, I would be willing to raise my score from 6 to 8.\\n\\nWhile I acknowledge the various contributions of your work, I see the application not as merely supporting the framework, but as an integral part of it, as human interaction is central to the framework\\u2019s functionality. This includes the case study and findings: these were not automatically computed but rather emerged through the use of the framework and human interaction. In my view, this interaction is key, as the best results of the framework could either be missed or misinterpreted depending on the nature of that interaction (see for example the comment of reviewer bRZR already questioning the interpretability of Figure 4.). \\n\\nAlthough the findings you discovered are both interesting and novel, I believe the tool is crucial for the long-term value of this work to the community. It will enable other researchers to uncover similar misalignments and effectively leverage your proposed framework.\"}" ] }
0RHMnPj8no
Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization
[ "Guy Kornowski", "Daogao Liu", "Kunal Talwar" ]
We study differentially private (DP) optimization algorithms for stochastic and empirical objectives which are neither smooth nor convex, and propose methods that return a Goldstein-stationary point with sample complexity bounds that improve on existing works. We start by providing a single-pass $(\epsilon,\delta)$-DP algorithm that returns an $(\alpha,\beta)$-stationary point as long as the dataset is of size $\widetilde{\Omega}\left(1/\alpha\beta^{3}+d/\epsilon\alpha\beta^{2}+d^{3/4}/\epsilon^{1/2}\alpha\beta^{5/2}\right)$, which is $\Omega(\sqrt{d})$ times smaller than the algorithm of \citet{zhang2023private} for this task, where $d$ is the dimension. We then provide a multi-pass polynomial time algorithm which further improves the sample complexity to $\widetilde{\Omega}\left(d/\beta^2+d^{3/4}/\epsilon\alpha^{1/2}\beta^{3/2}\right)$, by designing a sample efficient ERM algorithm, and proving that Goldstein-stationary points generalize from the empirical loss to the population loss.
[ "Differential privacy", "nonconvex optimization", "nonsmooth optimization", "Goldstein stationarity" ]
Reject
https://openreview.net/pdf?id=0RHMnPj8no
https://openreview.net/forum?id=0RHMnPj8no
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qyPCRnc7Ln", "nf9WSFkgwC", "m9t4trOgOz", "kmLWWhqOO9", "dSZHqkLcnq", "d7LDcZxWVL", "d6CwLo8MoZ", "ViJhfBvrwd", "UODYU9cI1v", "SBZm3IsP64", "Qet3RSgpHH", "IGurMxANnN", "HY6y5NtcKj", "DilujwBiaC", "D3cSEBXLLz", "AkSEtfh20l", "9lKXxhBtsD", "89zVEKAwh7", "7RYILIXldb", "7BqRXnscjh", "3DsUV5gKB3", "0iTXDNsE3p" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1732124317507, 1732404496143, 1730671374719, 1732585874365, 1732405081388, 1734657676025, 1737524136264, 1732585802505, 1732124851803, 1732651756877, 1732124947973, 1732124656494, 1732406678366, 1732124423972, 1732406135947, 1732405693182, 1732607674613, 1732585125030, 1730546992205, 1730692285888, 1730677339446, 1732380853195 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_hF98" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_hF98" ], [ "ICLR.cc/2025/Conference/Submission11633/Area_Chair_2UU6" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_KZV5" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_hF98" ], [ 
"ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_WTuS" ], [ "ICLR.cc/2025/Conference/Submission11633/Authors" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_WTuS" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_KZV5" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_UGBN" ], [ "ICLR.cc/2025/Conference/Submission11633/Reviewer_hF98" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer KZV5\", \"comment\": \"We thank the reviewer for their time and effort, and are encouraged by their appreciation of our contribution and presentation.\\n\\nWe also thank the reviewer for their questions, and we will modify writing to better explain the points they have raised. We address them as follows:\\n\\n1. Indeed, while the single-pass algorithm uses each datapoint $\\\\xi_i$ once throughout its run, in each step it queries $m$ oracle calls in order to stabilize the response (hence reducing sensitivity), as seen for example in line 6 of Algorithm 3. In other words, the oracle complexity is m times the sample complexity, and hence it does not \\u201cbreak\\u201d known lower bounds.\\n\\n2. Yes, this is a good point: the O2NC itself is optimal, as it is matched by a (non-private) lower bound, as discussed in the original paper by Cutkosky, Mehta and Orbaona. The difficulty in having a matching lower bound lies in the private regime, for which even in the more well-studied smooth non-convex case there are currently gaps between known upper and lower bounds.\"}", "{\"comment\": \"Thank you for your question. 
If we do not employ the tree mechanism and instead add independent Gaussian noise (denoted as $\\\\zeta_t$) to privatize the gradient $g_t$ at each step, the noise in the cumulative gradient sum would grow as $\\\\sum \\\\zeta_t$, leading to a significant increase in total noise.\\n\\nAdditionally, if we were to use the true gradient $g_t$ directly when computing the cumulative gradient sum rather than the privatized gradient, we would effectively be reusing the true gradient multiple times. This would require adding larger Gaussian noise to account for composition effects, also impacting the utility of the gradient estimates.\\n\\nHence, the tree mechanism is an ideal choice in scenarios where no data is reused and the cumulative gradient sum is required.\"}", "{\"summary\": \"This paper explores differentially private (DP) optimization algorithms for stochastic and empirical objectives that are non-smooth and non-convex, presenting methods that achieve Goldstein-stationary points with improved sample complexity bounds compared to prior work. The authors introduce a single-pass ($\\\\epsilon$,$\\\\delta$)-DP algorithm capable of producing ($\\\\alpha$,$\\\\beta$)-stationary points. Subsequently, they propose a multi-pass, polynomial-time algorithm that further refines sample efficiency by designing an effective ERM algorithm and demonstrating that Goldstein-stationary points can generalize from the empirical to the population loss.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper studies an important problem in DP non-convex optimization, and achieves improved sample complexity bounds over existing works.\", \"weaknesses\": \"The presentation is at times unclear, leading to a disjointed reading experience.\\n\\nAdditionally, the paper offers limited technical innovation. Most of the algorithmic framework and techniques appear to be adapted from previous works.\", \"questions\": \"1. 
The tree mechanism in Algorithm 1 is hard to understand and seems logically inconsistent. Specifically, regarding the function NODE, what is its intended purpose? There appears to be no defined output for NODE. Moreover, in line 13, $k'$ is assigned a value greater than $k$; however, line 14 subsequently tests the condition $k'\\le k$, which can never be true. As a result, $S$ remains an empty set and is never updated.\\n\\n2. In the fourth line of Proposition 2.5, for calculating each $X_i$, should it instead use $\\\\sum_{j=1}^i M_j$ rather than the expression given in the paper $\\\\sum_{j=1}^i M_i$?\\n\\n3. In (Cutkosky et al., 2023), $\\\\Delta_{t+1}$ is updated by $\\\\Delta_{t}+\\\\eta g$ (as stated in their Remark 10), while in Algorithm 2 line 8, $\\\\Delta_{t+1}$ is updated by $\\\\Delta_{t}-\\\\eta g$. Could you clarify the rationale behind this difference?\\n\\n4. In Theorem 3.1, while the sample complexity has been reduced by a factor of $\\\\Omega(\\\\sqrt{d})$ compared to (Zhang et al., 2024), this comes at the expense of increasing the number of random directions $m$ from $d$ to $d^2$, potentially resulting in a longer runtime.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you once again for your detailed comments. We would greatly appreciate knowing if our responses have adequately addressed your concerns and questions, and we would also be happy to engage in any further discussions if needed.\"}", "{\"comment\": \"Thank you for your response and detailed explanation. I would like to clarify further: do you maintain a separate tree mechanism for every $\\\\Sigma$ iterations for Algorithm 3?\"}", "{\"metareview\": \"This paper presents new single-pass and multi-pass algorithms for differentially private (DP) optimization of nonsmooth nonconvex objectives. 
The authors provide new state-of-the-art sample complexity for both these types of algorithms. The algorithm is similar to that of Zhang et al., 2024, but the authors provide a new analysis utilizing the smoothness of the randomly smoothed envelope to improve the sensitivity of the zeroth-order gradient, and therefore improve the utility trade-off of private optimization. However, many reviewers had concerns about the presentation and novelty of the techniques used. The paper also does not provide experimental validation of its results.\", \"additional_comments_on_reviewer_discussion\": \"Authors provided more conceptual clarity, but they didn't improve the presentation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you once again for your detailed comments. We would greatly appreciate knowing if our responses have adequately addressed your concerns and questions, and we would also be happy to engage in any further discussions if needed.\"}", "{\"title\": \"Response to reviewer WTuS\", \"comment\": \"We thank the reviewer for their time, for appreciating the strengths of our work, and for their constructive efforts in helping us strengthen it.\", \"we_address_the_raised_questions\": \"1. It is currently not known whether the result of this paper is tight, due to lacking lower bounds in the private regime. The non-private term is indeed optimal (as indicated by the lower bound by Cutkosky et al.), whereas the exact private terms are not even known in the well-studied smooth nonconvex case.\\n\\n2. As we discuss in the paper, we identify several challenges in order to improve previous results. The first of these is to decrease the sensitivity of the gradient estimator, which allows adding less noise in order to privatize. 
Moreover, we then also further improve the results by utilizing the empirical objective for which we derive tighter guarantees, and prove that these also generalize to the population, which no previous work has done in this context. We further discuss these in the top common comment.\\n\\n3. Goldstein-stationarity has recently emerged as a popular framework for designing and analyzing optimization algorithms for nonsmooth nonconvex objectives, which is an important regime in modern deep learning applications. The reason for this is that function-value minimization as well as stationarity are impossible to efficiently achieve in the non-smooth non-convex setting, and Goldstein stationarity is the strongest known optimality condition that can be efficiently achieved. While many works studied this notion without privacy, only one previous paper studied this notion in the private setting. In our paper, we substantially improve upon these previous results, and along the way develop techniques (e.g., generalization of Goldstein-stationary points) that may find uses beyond private optimization.\\n\\n4. As pointed out throughout the paper (and also acknowledged by the reviewer, e.g. strengths #3 and #5), there are several novel aspects in our analysis that allow us to derive significantly better results than previous works on this subject. These include reducing the sensitivity of the gradient estimator, and also generalizing from the empirical loss to the population loss. Please note the top common comment for further discussion.\\n\\n5. In private stochastic optimization, the sample complexity consists of terms that depend on the privacy parameters, and terms that are independent of them (referred to as \\u201cnon-private\\u201d terms). 
We significantly improve both terms in our work, and emphasize that the non-private term we get is even better than previously thought to be possible (and in particular, optimal).\\n\\nGiven that the reviewer acknowledges the significant contributions of our work, we kindly ask them to consider reevaluating their rating in light of these clarifications. We hope we have been able to address the raised questions, and please let us know in case of any additional questions or feedback.\"}", "{\"comment\": \"Thanks for your response. I will keep my score.\"}", "{\"title\": \"Rebuttal: Common comment\", \"comment\": \"We thank the reviewers for their time and effort. We are encouraged by their appreciation of our contributions, as well as by their constructive efforts in helping us strengthen our work.\\nWe address all questions by responding directly to each review. Here we briefly reiterate our key responses.\\n\\n**Novelty**:\\n\\nSome reviewers asked about the novelty of our results and techniques, compared to previous results.\\nThe sample complexity we derive in this work is not only better than previous results, it was previously erroneously claimed to be impossible - so we believe that even the fact that we have been able to get to these results carries novelty.\\n\\nFurthermore, our analysis introduces several factors which were not used by previous works on this subject, which were also acknowledged by some reviewers.\\nFirst, by identifying a simple concentration argument for the gradient estimator, we reduce its sensitivity, already leading to a significant improvement over prior work. Moreover, to further improve the sample complexity, we analyze the empirical objective which was not considered by any prior work on DP nonsmooth nonconvex optimization, and prove that Goldstein-stationary points generalize from the empirical loss to the population. 
Not only does this generalization result lead to better sample complexity in the DP setting, we believe it is of independent interest in nonsmooth nonconvex optimization in general.\\n\\nWe would like to add that, in addition to the points above, it is indeed true that our work utilizes several well-studied techniques in DP optimization and in NSNC optimization. We believe that the fact that we are able to improve the sample complexity to the extent that we have (beyond what some thought possible), and the clarity in which we properly acknowledge the use of prior work in order to do so, is an advantage of our writing instead of a drawback. Throughout the paper we explain how we modify and build on top of previous algorithms in order to get these improved results, as generally acknowledged by the reviewers.\\n\\n**Presentation of the tree mechanism**:\\n \\nWe thank the reviewers that have noted that the presentation of the tree mechanism can be improved, including spotting typos in the index counters.\\nWhile this mechanism is standard in the DP literature and our use of it is relatively straightforward, we will properly revise its introduction and add intuition about its utility and corresponding guarantee for readers who are less familiar with it.\\n\\n**Efficiency**:\\n\\nSome reviewers pointed out that, as we discuss in the paper, the improvement in sample complexity is at the expense of increased oracle complexity. We would like to emphasize two related points:\\n- First-order algorithm with significantly better runtime: In Appendix C, we introduce a first-order algorithm with a substantially reduced oracle complexity (see Remark C.5). We believe the first-order analysis complements our zero-order results, and in particular addresses the discussed issue. 
We will make sure to further emphasize this in the main text, in which we chose to provide only our zero-order algorithms due to lack of space, and as they directly correspond to the previous work on this subject.\\n\\n- Trading off sample complexity and runtime: A key advantage of our analysis is that it easily allows trading-off sample vs. oracle complexity, which is controlled by the assignment of $m$ in the algorithm. Indeed, on one hand the oracle complexity clearly grows with $m$, while on the other hand the sensitivity bound we derive has an additional term which decays with $m$ (Lemma 3.3). We choose to assign $m$ large enough in order for this additional term to be negligible, hence reducing the sample complexity as small as possible, but this is not generally required by the analysis which enables trading them off smoothly.\"}", "{\"title\": \"Response to reviewer hF98\", \"comment\": \"We thank the reviewer for their time, and appreciate their constructive efforts in helping us strengthen our work.\\n\\nWe have attempted to address the novelty concerns in the common response above. Below we address the other questions raised by the reviewer:\\n\\n1,2.: Thank you for pointing out the typos in the tree mechanism section, we appreciate the reviewer\\u2019s detailed review. The condition in Algorithm 1 should indeed be $k' \\\\leq t$, and the expression in the fourth line of Proposition 2.5 should be $\\\\sum_{j=1}^i M_j$. We have corrected these errors in the revised version.\\n\\nThe tree mechanism approach ensures that the algorithm maintains strong differential privacy guarantees while minimizing the error introduced by noise. It has become a standard tool in differential privacy and in private optimization, and further details can be found in lecture notes and textbooks. 
To clarify the tree mechanism, we provide a brief and non-technical explanation:\\n\\nSuppose we are given a sequence of real numbers $X_1, X_2, \\\\ldots, X_n$, where each $X_i$ lies in $[0, 1]$, and we aim to compute the cumulative sums $\\\\sum_{j=1}^i X_j$ for all $i \\\\in [n]$ while preserving differential privacy.\\n\\nA naive approach would add independent Gaussian noise to each $X_i$, but this results in an error proportional to $\\\\sqrt{n}$, which grows poorly with $n$. The binary tree mechanism improves on this by organizing the computations into a hierarchical structure:\\n\\n\\n- Binary Tree Construction: We construct a complete binary tree with $n$ leaves, where each leaf corresponds to one $X_i$. Each internal node of the tree represents the sum of the values of its descendant nodes. For instance: A node spanning the range $(u, v)$ represents $\\\\sum_{j=u}^v X_j$; Its left child spans $(u, m)$, and its right child spans $(m+1, v)$, where $m$ is the midpoint of $[u, v]$.\\n\\n- Adding Independent Noise: To privatize the sums, independent Gaussian noise is added to the output of every node in the tree. This means that, for any node spanning a range $(u, v)$, the privatized sum is:\\n$\\n \\\\text{PrivSum}(u, v) = \\\\sum_{j=u}^v X_j + N_{u,v},\\n$\\n where $N_{u,v}$ is Gaussian noise with an appropriate scale determined by the privacy parameters.\\n\\n- Querying Cumulative Sums: To compute any cumulative sum $\\\\sum_{j=1}^i X_j$, the mechanism selects at most $\\\\log n$ nodes from the tree whose ranges cover $[1, i]$. The noisy outputs from these nodes are combined to produce the privatized result. This hierarchical structure ensures that the total error scales with $O(\\\\log^2 n)$, which is significantly smaller than $\\\\sqrt{n}$ for large $n$.\\n\\n- Role of $\\\\text{NODE}(t)$: In Algorithm 1, the function $\\\\text{NODE}(t)$ determines the set of nodes needed to cover the range $[1, t]$ in the tree in a greedy way. 
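The hierarchical scheme described in the bullets above can be sketched in a few lines. The snippet below is a minimal illustration only: the function names, the fixed noise scale `sigma`, and the half-open index convention are our assumptions for exposition, not the paper's actual Algorithm 1 or its privacy calibration.

```python
import random

def tree_prefix_sums(xs, sigma=1.0, seed=0):
    """All privatized prefix sums of xs via the binary tree mechanism.

    Every dyadic node of a binary tree over the entries stores its partial
    sum plus independent Gaussian noise; each prefix query [1, i] is then
    answered from at most O(log n) noisy nodes, so the error grows only
    polylogarithmically in n instead of as sqrt(n).
    """
    rng = random.Random(seed)
    n = len(xs)
    noisy = {}  # (u, v) -> sum(xs[u:v]) + noise, over half-open dyadic ranges

    def build(u, v):
        noisy[(u, v)] = sum(xs[u:v]) + rng.gauss(0.0, sigma)
        if v - u > 1:
            m = (u + v) // 2
            build(u, m)
            build(m, v)

    build(0, n)

    def cover(u, v, i):
        # Greedy dyadic cover of the prefix [0, i) -- the role of NODE(t).
        if v <= i:
            return [(u, v)]
        if u >= i:
            return []
        m = (u + v) // 2
        return cover(u, m, i) + cover(m, v, i)

    return [sum(noisy[r] for r in cover(0, n, i)) for i in range(1, n + 1)]
```

With `sigma=0` the function returns the exact prefix sums, which makes the covering logic easy to sanity-check; with `sigma>0` each answer combines at most logarithmically many noisy nodes.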
These nodes are then used to determine which Gaussians should be added to compute the privatized sum.\\n\\n\\n\\nWe hope this explanation clarifies the tree mechanism and its implementation. Please let us know if you have additional questions or feedback.\\n\\n\\n3. These are precisely the same update rules in disguise, since the authors therein \\u201csubtract\\u201d the update whereas we \\u201cadd\\u201d it, so they are defined as negations.\\nIn detail, in remark 10 therein, they query the gradient at $z_n:=x_n+(s_n-1)\\\\Delta_n$, and since $s_n\\\\sim[0,1]$, it holds that $(s_n-1)\\\\sim[-1,0]$, or in other words $z_n=x_n-s\\u2019_n\\\\Delta_n$ where $s\\u2019_n\\\\sim[0,1]$.\\n\\n4. This is true. We therefore complement our results by a first-order algorithm (in Appendix C) with a substantially reduced oracle complexity which avoids this blow-up - see Remark C.5, and also our top-comment for further discussion.\\n\\nWe kindly ask the reviewer to consider reevaluating the rating in light of these corrections and clarifications, as they address the key concerns they raised.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to reassess our work. We look forward to incorporating your suggestions in the revised manuscript. If you have any other questions or comments, please let us know!\"}", "{\"title\": \"Response to reviewer UGBN\", \"comment\": \"We thank the reviewer for their time and effort, and are encouraged by their appreciation of our contribution.\", \"regarding_the_question_the_reviewer_raised\": \"In the ERM setting, we re-use samples, and therefore must use privacy composition theorems in order to argue about the privacy of the overall algorithm. Since Gaussian composition guarantees are simpler to apply than their counterparts for the tree mechanism, we choose to use the Gaussian mechanism to avoid further unnecessary complications. 
We thank the reviewer for raising this question, and we will clarify this in the revised version.\"}", "{\"comment\": \"Thanks for the clarification. I look forward to the revised manuscript\"}", "{\"comment\": \"Thank you for your timely response. Yes, we use a separate tree mechanism for every $\\\\Sigma$ iteration. A shared tree mechanism across iterations could risk privacy leakage and compromise the differential privacy guarantee. We will ensure to clarify and emphasize this point in the revised version.\"}", "{\"comment\": \"Thanks for your reply, I will keep my score.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you once again for your detailed comments. We would greatly appreciate knowing if our responses have adequately addressed your concerns and questions, and we would also be happy to engage in any further discussions if needed.\"}", "{\"summary\": \"This paper addresses differential privacy (DP) in nonsmooth, nonconvex optimization, aiming to improve sample complexity for finding Goldstein-stationary points in such challenging settings. Traditional DP optimization methods often assume convexity or smoothness, but this work proposes new algorithms that can handle nonsmooth nonconvex (NSNC) objectives.\\n### Key Contributions\\n\\n1. **Single-Pass Algorithm** \\n The authors present a single-pass (\\u03b5, \\u03b4)-DP algorithm that finds an (\\u03b1, \\u03b2)-stationary point with improved sample complexity. This algorithm reduces dimensional dependence by a factor of \\\\(\\\\Omega(\\\\sqrt{d})\\\\) over previous approaches, making DP optimization feasible in high-dimensional settings while maintaining privacy guarantees.\\n\\n2. **Multi-Pass Algorithm** \\n A multi-pass ERM-based algorithm further enhances sample efficiency, allowing the algorithm to iterate over the data multiple times and achieve sublinear dimension-dependent sample complexity. This approach improves convergence while satisfying DP constraints.\\n\\n3. 
**Generalization from ERM to Population Loss** \\n The authors establish that Goldstein-stationarity achieved on empirical loss also applies to the population loss with high probability. This result expands the utility of their approach by ensuring that empirical results generalize to the population.\\n\\n\\nThe proposed algorithms make notable progress in DP optimization for NSNC problems, improving sample efficiency while maintaining privacy. This advancement is valuable for practical applications where data privacy is essential, especially in high-dimensional machine learning settings. Additionally, the generalization result strengthens the applicability of Goldstein-stationary points beyond empirical settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Significant Improvement in Sample Complexity**\\n The paper offers a substantial reduction in sample complexity for differentially private (DP) nonsmooth nonconvex (NSNC) optimization. The single-pass algorithm achieves a lower dependence on dimension \\\\(d\\\\) compared to prior work, which is highly impactful for high-dimensional problems in machine learning.\\n\\n2. **Innovative Use of Goldstein-Stationarity** \\n By focusing on Goldstein-stationary points, the authors leverage a nuanced stationarity condition suitable for nonsmooth nonconvex optimization, allowing for more practical solutions where traditional gradient-based methods fall short. This approach builds on and expands the utility of Goldstein-stationarity in DP settings.\\n\\n3. **Generalization from Empirical to Population Loss** \\n The paper addresses a theoretical gap by proving that empirical guarantees of Goldstein-stationarity translate to the population loss. This generalization strengthens the theoretical foundation and practical relevance of the proposed algorithms, as it ensures that results on empirical data apply to broader distributions.\\n\\n4. 
**Applicability to Real-World DP Machine Learning Tasks** \\n The proposed algorithms are zero-order (using only function evaluations) and thus avoid the need for gradient information, making them suitable for a wider range of machine learning models that may have nonsmooth, nonconvex loss landscapes. This approach is particularly beneficial in privacy-sensitive applications like federated learning.\\n\\n5. **Novel Dimension-Independent Term** \\n The single-pass algorithm introduces a dimension-independent term in the \\\"non-private\\\" component of the sample complexity, challenging previous assumptions in DP optimization for NSNC objectives. This innovation indicates potential for further sample complexity improvements and opens new directions for DP research in nonconvex settings.\", \"weaknesses\": \"1. There is some typo, for instance, line 077 the last word should be perform.\\n\\n2. Randomized Smoothing is an ordinary technique used in this setting, and I wonder the novelty except for this to deal with the non-smooth setting.\", \"questions\": \"1. Is the result in the paper tight? In other words, is there a lower bound provided?\\n\\n2. What is the key challenge in improving the result by at least $\\\\(\\\\sqrt{d}\\\\)$? Specifically, how does this improvement compare to the results in the referenced work?\\n\\n3. What role does the (\\u03b1, \\u03b2)-Goldstein stationary point play in this paper?\\n\\n4. What is the novelty of this paper compared to previous works?\\n\\n5. Can you explain more about the result regarding the non-private term and private and how they contribute to the final result?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents algorithms to improve sample complexity in differentially private (DP) nonsmooth, nonconvex (NSNC) optimization. 
The authors propose two zero-order algorithms that improve the results over Zhang et al. 2024:\n1. Single-pass, sqrt(d) improvement over Zhang et al. 2024. The authors also establish a dimension-independent \\u201cnon-private\\u201d term, which was not known before for NSNC DP optimization.\n2. A multi-pass algorithm further improves the sample complexity, yielding the first algorithm to perform private ERM with sublinear dimension-dependent sample complexity for NSNC objectives.\nAdditionally, the authors show that Goldstein-stationarity generalizes from the ERM to the population.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured and easy to follow.\", \"The results improve over prior state of the art, establishing the first DP NSNC ERM algorithm with sublinear dim-dependent sample complexity. The non-private term is dimension-independent, which improves over the previous dimension-dependent result in Zhang et al. 2024.\"], \"weaknesses\": \"My main concern is about contextualizing the contribution:\\nLike Zhang et al. 2024, this paper also heavily relies on \\u201cOnline-to-Non-Convex conversion\\u201d (O2NC) of Cutkosky et al. (2023). The authors also mention that a lower bound is unknown, making it hard to assess the contribution beyond incremental improvement.\\n\\nA discussion of the Tree Mechanism is missing. It would be very hard for readers not familiar with the Tree Mechanism to understand.\", \"typo\": \"page 2 and in particular is the first algorithm to [preform] private ERM\", \"questions\": \"1. I am confused by the explanation of the improvement on the non-private term Remark 3.2. 
The authors explain that\\n> while the optimal zero-order oracle complexity is d/\\u03b1\\u03b2^3 (Kornowski & Shamir, 2024), and in particular must scale\\nwith the dimension (Duchi et al., 2015), the sample complexity might not.\\nSince the algorithm is one-pass, then the sample complexity would be worse than the oracle complexity?\\n2. Is Online-to-Non-Convex conversion optimal? (Related to the weakness above) If not, any algorithms based on it will be suboptimal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of designing Differentially Private Algorithms for Nonsmooth Nonconvex Optimization. It specifically studies the zeroth order settings. Thanks to more careful analysis, the paper is able to improve on the dimension dependence of the previous results. It also extends the past result to the ERM settings.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is a fairly simple extension of previous results. The authors are able to improve the past sample complexity by an order of $O(\\\\sqrt{d})$ by using a high probability subgaussian bound on the sensitivity of the queries.\", \"The paper also extends the results to other settings.\", \"The generalization statement (Proposition 5.1) is a cool result to show the validity of the ERM approach for solving the Goldstein-stationary point.\", \"The paper is well written overall.\"], \"weaknesses\": [\"The paper is basically using the same algorithm proposed by [1]. This is not a huge issue since they are able to make some nice modifications to improve the sample complexity. However, this does limit the potential impact of the paper.\", \"I also think $m$ is quite large, which would make it really inefficient to run in practice. 
Currently, m can be something like $O(d^2T^{4/5})$, which is very hard to do in practice.\", \"It would be interesting if there were some matching upper bounds.\", \"[1] Zhang, Qinzi, Hoang Tran, and Ashok Cutkosky. \\\"Private zeroth-order nonsmooth nonconvex optimization.\\\" arXiv preprint arXiv:2406.19579 (2024).\"], \"questions\": [\"Why did the authors decide to use Gaussian Mechanism instead of the tree mechanism for ERM?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re: Response to reviewer hF98\", \"comment\": \"Thank you for the clarification, particularly regarding the explanation of the tree mechanism. I have a follow-up question about its use in Algorithm 3. In each iteration $t$, the algorithm samples a minibatch $S_t$ from the unused data and computes the gradient $g_t$ based solely on $S_t$. Since each data sample is used only once and never revisited, what is the rationale behind employing the tree mechanism, which introduces noise to the cumulative gradient sum rather than to each instantaneous gradient?\"}" ] }
0R8JUzjSdq
LEMMA-RCA: A Large Multi-modal Multi-domain Dataset for Root Cause Analysis
[ "Lecheng Zheng", "Zhengzhang Chen", "Dongjie Wang", "Chengyuan Deng", "Reon Matsuoka", "Haifeng Chen" ]
Root cause analysis (RCA) is crucial for enhancing the reliability and performance of complex systems. However, progress in this field has been hindered by the lack of large-scale, open-source datasets tailored for RCA. To bridge this gap, we introduce LEMMA-RCA, a large dataset designed for diverse RCA tasks across multiple domains and modalities. LEMMA-RCA features various real-world fault scenarios from Information Technology (IT) and Operational Technology (OT), encompassing microservices, water distribution, and water treatment systems, with hundreds of system entities involved. We evaluate the performance of fourteen baseline methods on LEMMA-RCA across various settings, including offline and online modes, as well as single and multi-modal configurations. The dataset is publicly available at https://lemma-rca.github.io/.
[ "root cause analysis", "multi-modal learning", "microservice systems", "benchmark data" ]
Reject
https://openreview.net/pdf?id=0R8JUzjSdq
https://openreview.net/forum?id=0R8JUzjSdq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ymSl9ydh4l", "yK2cYK5yJq", "vMBbLqifsq", "vKiVKuDPfe", "ulBW7waORW", "uhpzDB2gg7", "scLwJLixZB", "s31PY4a0Ac", "qxnDVPoVjB", "q5ynG2G0dH", "p7LA3KsxmU", "n5dbAmJyTo", "krvIfDJc0O", "hspRNSLYT7", "bV9Ns9izqf", "ZZF3wSGh1T", "ZYVOKR6JqD", "YDQIhyDPlC", "XwxRHljwTh", "Xklb6jh1Ut", "V6oImRwrF2", "Ryr0Rabt67", "RqVFV1YklL", "R8oog7suBY", "Q4dXpn0VlH", "PVy62rTw6K", "P65Ihk0JEu", "Lotn3wXVUm", "LNUaTiw0lA", "Isc3rEFKG2", "IXM5n03JUg", "IX55EBTTHv", "IUtcyWTzZh", "DPZvJWz22o", "BGxaO3VP3c", "BBP1x0mVkC", "ArcwQMeyeE", "4tPtzY5wVb", "4pAoXQeGv7", "492GJcqrsF", "3B272XGZ3C", "39nk8hYcmu", "1BBBEyOgzm" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732268786693, 1732503057267, 1731982122127, 1731980430605, 1732283936396, 1732263907379, 1734668844766, 1732194280051, 1737523874799, 1732543071858, 1732586727953, 1730600515265, 1731988388439, 1732194458363, 1731978799219, 1732908028357, 1732597645640, 1732566725564, 1731983446608, 1732588786625, 1731980468163, 1732266249736, 1732374509644, 1732907892854, 1731978754474, 1729910722237, 1732907868526, 1731982150075, 1732526995765, 1733021537242, 1731980506972, 1732256075404, 1733151706098, 
1732550626280, 1732064439665, 1732544678465, 1731983544158, 1730713814681, 1732192312508, 1731983592611, 1732373834328, 1732566606981, 1730476039947 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_yFv3" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_E7LX" ], [ "ICLR.cc/2025/Conference/Submission7917/Area_Chair_ScaX" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_BP9D" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_UCbX" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_yFv3" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_BP9D" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_E7LX" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_E7LX" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_yFv3" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_BP9D" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_UCbX" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_yFv3" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_E7LX" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_UCbX" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Authors" ], [ "ICLR.cc/2025/Conference/Submission7917/Reviewer_UCbX" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the response.\", \"comment\": \"1. \\\"While REASON performs well in some metrics, it is still far from achieving optimal performance\\\"\\nFor Product Review datasets, Reason achieves 100% on PR@5 and 95% an MAP@5 with solely metric data. I think this is definitely not far from optimal.\\n\\n2.\\\"The system faults in the train ticket dataset are entirely simulated\\\"\\nWhat do you mean by entirely simulated? Train ticket project do have a deployment system, where faults can be injected and data can be collected as the proposed Product Review datasets. I am interested in the difference on injected system faults.\"}", "{\"title\": \"Follow-Up on Feedback Before Discussion Phase Ends\", \"comment\": \"Dear Reviewer yFv3,\\n\\nThank you for your valuable feedback on our paper. As the ICLR public discussion phase is ending soon, we would like to confirm if our responses have fully addressed your concerns. If there are any remaining issues, we\\u2019d be happy to provide further clarifications.\\n\\nIf you feel that all concerns have been resolved, we hope this could be reflected in your evaluation.\\n\\nWe sincerely appreciate your time and thoughtful input!\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Dear reviewer UCbX:\\n\\nThank you for your invaluable feedback. 
We would like to address your primary concerns and provide corresponding responses below. \\n\\\\\\n\\\\\\n**Response to: Should provide more valuable data rather than simply assembling data. The quality of the data is not clarified and the data collection period is short, difficult to see if the dataset captures a wide range of fault patterns...**\\n\\\\\\n\\\\\", \"a\": [\"Thank you for raising the issues regarding figure readability and consistency in reporting experimental results. We have made the following updates to address your concerns:\", \"1. Improved Figure Readability:\", \"We have updated the figures to a **vectorized format** for better clarity and resolution.\", \"Figure 2(a)(b) and Figure 4(a)(b) have been enhanced to improve their legibility.\", \"To further ensure readability, we have included these figures in a **single-column layout** in Appendix G, making them easier to interpret.\", \"2. Consistency in Reporting Experimental Results:\", \"We have reviewed and updated the experimental tables **(Table 3, 4, 5, and 6)** to ensure consistency in the number of decimal places across all reported results. All results are now presented to three decimal places for uniformity.\", \"Regarding standard deviation, it is generally not reported in root cause analysis tasks as the results focus on deterministic evaluations of methods rather than stochastic variability.\", \"We appreciate your constructive feedback and hope these updates address your concerns effectively. Please let us know if there are additional areas where further improvements can be made.\"]}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Dear reviewer yFv3\\uff1a\\n\\nThank you for your invaluable feedback. We would like to address your primary concerns and provide corresponding responses below. \\n\\\\\\n\\\\\\n**Response to: It is surprising that many faults happened in 8 days. It seems that these faults are simulated to mimic the real world scenario. 
**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your feedback. We would like to clarify our contribution regarding the SWaT and WADI datasets and their relevance to our work.\\n\\n1. Transforming SWaT and WADI for Root Cause Analysis:\\n\\nWhile SWaT and WADI were originally used for anomaly detection tasks, we are **the first to transform these datasets into a root cause analysis (RCA) task**. This transformation includes preprocessing and adapting the data to create meaningful continuous labels or construct KPIs necessary for RCA. These continuous labels enable more granular evaluations of causality and root cause patterns compared to the discrete labels used in their original anomaly detection context..\\n\\n2. Sharing the Transformed Data:\\n\\nThe transformed SWaT and WADI datasets have been shared with researchers via private communications and used in subsequent works, including studies such as REASON (Wang et al., KDD 2023). While REASON evaluates these datasets for RCA, it is based on our preprocessing and transformation of the data, making it possible to use SWaT and WADI in this context.\\n\\n3. Preprocessing Code and Accessibility:\\n\\nTo ensure transparency and reproducibility, we have provided the source code for preprocessing SWaT and WADI datasets. This allows other researchers to validate our approach and further explore these datasets for root cause analysis.\\n\\n4. Contribution as a Dataset Paper:\\n\\nWhile SWaT and WADI are existing datasets, our contribution lies in adapting them for a new task (RCA), enhancing their utility for the research community. This transformation aligns with the goals of a dataset paper, as it extends the applicability of well-known datasets into a new domain.\\n\\nWe hope this explanation clarifies the necessity and significance of our preprocessing approach and our contribution to enabling RCA research using SWaT and WADI. 
If further details are required, we are happy to provide additional explanations.\", \"we_argue_that_the_two_day_data_collection_window_is_sufficient_to_capture_comprehensive_system_behaviors_for_the_following_reasons\": \"- Focus on Malfunction Patterns: Collecting additional data over longer periods, especially for normal patterns, would be unnecessary, as our primary focus is on malfunction patterns caused by faults.\\n- Duration of Malfunction Patterns: Malfunction patterns lasting several hours to a day are typically sufficient to reveal the malfunction behaviors associated with a system fault. This is consistent with real-world scenarios where identifying faults promptly is critical to avoid significant financial losses (e.g., millions of dollars in e-commerce platforms).\\nWe hope this explanation addresses your concerns regarding the data collection process and the nature of the faults. If further clarification is needed, we would be happy to provide additional details.\\n\\nWe hope this explanation addresses your concerns regarding the data collection process and the nature of the faults. If further clarification is needed, we would be happy to provide additional details.\\n\\\\\\n\\\\\\n**Response to: SWaT and WADI are from existing work. SWat and WADI are already evaluated for RCA in the REASON (Wang et al. KDD2023) paper. Since this is a dataset paper, including existing datasets into the proposed one should not be seen as the contribution.**\\n\\\\\\n\\\\\"}", "{\"title\": \"Clarifying the Relevance of Our Work to the ICLR Main Track\", \"comment\": \"Thank you for your thoughtful feedback and for increasing your score. We would like to address your concern regarding the relevance of this work to the ICLR main track. 
While ICLR does not have a specific \\\"Datasets and Benchmarks\\\" track like NeurIPS, **the conference does feature a primary area in the main track titled \\\"Datasets and Benchmarks.\\\"** This demonstrates that the conference recognizes and values contributions in this area as part of its scope. We hope this clarification highlights the paper\\u2019s alignment with the main track and encourages your continued support.\\n\\n**We greatly appreciate your time and thoughtful consideration.**\"}", "{\"title\": \"Thanks and no further questions.\", \"comment\": \"Thanks authors for the prompt response. I have no further questions.\\nI have increased my score to 5.\\nMy last concern is relevance of this work to the iclr main track. Had this been a datasets track I would have strongly championed for the paper. But I would like to depend on other reviewers assessment for this point. I will again update the score if needed.\"}", "{\"metareview\": \"The paper contributes\\n(1) large dataset for root cause analysis\\n(2) 14 baselines evaluated .\\n\\nThe reviewers feel that if ICLR had a separate Dataset track this would be a sure accept. \\nThe methodological contributions are modest and hence it is not clear on how this paper will stand with other ICLR papers which have more methodological contributions. At this point the paper is, at best, borderline.\", \"additional_comments_on_reviewer_discussion\": \"The points of discussion were\\n1. Relevance to main track of ICLR\\n2. Clarification of Baselines\\n3. Lack of Error bars\\n\\nThe authors responded to all them.\"}", "{\"title\": \"We Would Greatly Appreciate Further Details Regarding Dataset Evaluation Concern\", \"comment\": \"Dear Reviewer UCbX,\\n\\nThank you for your feedback. We would greatly appreciate it if you could provide more details on why you believe the dataset may not accurately assess the algorithms. This will help us address your concerns more thoroughly. 
If there is anything unclear in our previous response to any question, please let us know, and we will gladly provide further clarification.\\n\\nThank you again for your time and valuable input.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"My question on hyperparameter tuning is not well addressed. Have you tuned the hyper-parameters for each baseline?\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you to the author for addressing the issues I mentioned above; however, I still have some questions.\\n\\n> We have reviewed and updated the experimental tables (Table 3, 4, 5, and 6) to ensure consistency in the number of decimal places across all reported results. All results are now presented to three decimal places for uniformity.\\n\\nFirstly, the experimental results you presented are still not standardized. For example, in Table 3, row 452 uses percentages, while other rows presented as three decimal place; results in part of row 423 are still not standardized.\\n\\n> Regarding standard deviation, it is generally not reported in root cause analysis tasks as the results focus on deterministic evaluations of methods rather than stochastic variability.\\n\\nSecondly, the lack of reported standard deviations undermines the credibility of the experimental results, which are unreliable.\\n\\nFurthermore, your analysis indicates that your task data is time-series data; however, algorithms such as PC, Notears, and CORAL are not designed for time-series data. Therefore, comparing them on your dataset is not meaningful. You should consider incorporating more algorithms specifically designed for time-series data.\"}", "{\"summary\": \"In this paper, the authors proposed a new dataset with both metrics and log collected for the root cause analysis task. In addition, 8 existing RCA methods are evaluated on this dataset. The proposed datasets could be a good addition for evaluation of RCA methods for later research. 
However, it is not very clear what the benefit of including log modal data is. Existing methods work quite well on these datasets with only metrics modal.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. New multi-modal datasets are collected for RCA problem.\\n2. Eight existing RCA methods are evaluated on the proposed datasets.\", \"weaknesses\": \"1. The description of the data collection is insufficient. See Q1.\\n2. Some subsets of the datasets are from existing work and have been evaluated before. They should not be seen as the contribution of this work. See Q2.\\n3. The proposed IT ops datasets seems to be less challenging for existing works. See Q3.\", \"questions\": \"1. The authors claimed that the proposed datasets contain real system faults. If I understand correctly, the authors developed two microservice platforms and deployed them in production and collected the real system faults for 8 days when users are using these platforms. It is a bit surprising that so many faults happened in 8 days. Moreover, in the faults description section, it seems that these faults are simulated (e.g., External Storage Failure) to mimic the real world scenario. Could the authors clarify this?\\n2. It seems that SWaT and WADI are from existing work. The authors applied some anomaly detection algorithms on them to transform discrete labels into continuous ones. It is not clear why this is necessary. Moreover, SWat and WADI are already evaluated for RCA in the REASON (Wang et al. KDD2023) paper. Since this is a dataset paper, including existing datasets into the proposed one should not be seen as the contribution. \\n3. From experiments on existing methods, it seems that two IT ops datasets are not very challenging. For instance, REASON performs quite well in terms of PR@k, MRR and MAP@k on both of them with only the metrics data. 
What is the difference between the proposed datasets compared with existing ones, e.g., AIOps data in REASON and the popular train ticket datasets? When new datasets are proposed, they are expected to be more challenging, where current methods fail on them. If current method can handle the proposed well with only metric modal, what is the meaning of including log modal?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their response and for addressing my questions. I appreciate the efforts made. Here are my revised comments:\\n\\n1. **Issue of Overstatement:** The statement \\u201cWe evaluate the quality of LEMMA-RCA by testing the performance of eight baselines\\u201d seems to overstate the implications of the baseline tests. While I acknowledge the dataset\\u2019s quality through expert inspection and validation, I don\\u2019t think the baseline comparisons alone can determine the relative quality of datasets. I recommend revising similar statements.\\n2. **Baseline Implementation Details:** I suggest including the key parameters used for the baselines in the appendix, as these are crucial for replicating the results and understanding the methods. It would be informative to know if parameter tuning was performed for each baseline. Additionally, incorporating a discussion on the sensitivity to hyper-parameter tuning in a dedicated column could enhance the clarity and completeness of the comparison.\"}"
Could you please point me to where I can find the partial dependency graphs in the public datasets, if they are already released.\\n2. The graph discovery experiments conducted during the rebuttal are valuable because they attribute the failure of several causal baselines to the failure in recovering the true dependency graph. So, I suggest that you add these tables at least in the Appendix and discuss the results in the main paper.\\n3. I would like the authors to mention the missing dependency graph as a specific limitation. Perhaps you can add this in Table 1. That said, despite this small detail, I do agree that this large scale multi modal dataset is a valuable asset to the RCA community.\\n4. Please add additional baselines compared during rebuttal in the main paper.\\n5. A minor clarification: \\\"For each system fault, we computed the metrics individually and then averaged the results across four cases.\\\" -- Do you mean that you computed the causal graph separately for each test case? 
Also, which four cases?\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: Choice of Baseline Algorithms**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your suggestion to include additional baseline algorithms.\\n\\nAs per your recommendation, we have added BARO [1], a customized root cause analysis method, and PCMCI [2], a time-series causal discovery method, to our experiments using the Product Review sub-dataset.\\n\\nThe experimental results are summarized in the table below.\\n- BARO: This method demonstrates consistent performance across all metrics, highlighting its robust design for root cause analysis tasks.\\n- PCMCI: While PCMCI performs well on PR@5 and PR@10, it struggles with PR@1 and MRR, likely due to its primary focus on time-series data, which does not align fully with certain aspects of our dataset.\\n\\nThese results underscore the versatility of our dataset in supporting a diverse range of root cause approaches and emphasize its significance for advancing root cause analysis research.\\n| Method | PR@1 | PR@5 | PR@10 | MRR | MAP@3 | MAP@5 | MAP@10 |\\n|--------|------|------|-------|--------|-------|-------|--------|\\n| BARO | 50% | 50% | 50% | 50% | 50% | 50% | 50% |\\n| PCMCI | 25% | 50% | 50% | 34.16% | 25% | 30% | 40% |\\n\\nIf there are other specific algorithms you would like us to consider or further aspects of the experimental results to clarify, we are happy to incorporate additional analyses.\\n\\n[1] Pham, L., Ha, H., & Zhang, H. (2024). Baro: Robust root cause analysis for microservices via multivariate bayesian online change point detection. Proceedings of the ACM on Software Engineering, 1(FSE), 2214-2237.\\n\\n[2] Runge, J., Nowack, P., Kretschmer, M., Flaxman, S., & Sejdinovic, D. (2019). Detecting and quantifying causal associations in large nonlinear time series datasets. 
Science advances, 5(11), eaau4996.\"}", "{\"title\": \"Follow-Up on Feedback Before Discussion Phase Ends\", \"comment\": \"Dear Reviewer BP9D,\\n\\nThank you for your valuable feedback on our paper. As the ICLR public discussion phase is ending in a few days, we would like to confirm if our responses have fully addressed your concerns. If there are any remaining issues, we\\u2019d be happy to provide further clarifications.\\n\\nIf you feel that all concerns have been resolved, we hope this could be reflected in your evaluation.\\n\\nWe sincerely appreciate your time and thoughtful input!\"}", "{\"title\": \"Thank you authors\", \"comment\": \"Thank you for your active engagement during the rebuttal and for promptly addressing the queries. I have one final request:\\n\\nWhile these datasets are valuable for RCD tasks, I also see their utility in temporal causal discovery tasks, an area currently lacking robust datasets. Could you consider releasing the partial graphs as an adjacency matrix or a CSV file, formatted as used in your implementation? Additionally, I would appreciate it if you could release the code for the causal discovery experiments conducted in the paper. This is not urgent, as I understand it might require some time for cleanup.\\n\\nWhile I agree that your paper makes a **strong** contribution to the *datasets* aspect, I share the sentiment of other reviewers that the *benchmarking* section needs an appropriate selection of baselines, and a more thorough presentation of the results. 
\\n\\nI will retain my score for now.\"}", "{\"title\": \"Reply to Reviewer UCbX\", \"comment\": \"**Response to: You did not address my concerns regarding the multi-domain aspect you proposed, such as analyzing the relationship between the two domains and whether it is necessary to combine the OT and IT datasets.**\", \"a\": \"We appreciate the reviewer\\u2019s concern and would like to clarify our approach and rationale regarding the multi-domain aspect of our dataset. Below are our responses:\\n\\n1. **Diagnostic Nature of RCA:**\\n- Root Cause Analysis (RCA) is fundamentally a diagnostic or post-analysis task. This nature makes RCA inherently a case-by-case process, where each fault is treated as an independent instance requiring detailed analysis to identify its root cause. Consequently, cross-domain analysis is not a practical approach for RCA tasks, as the diagnostic process focuses on localizing specific causes rather than analyzing relationships across domains.\\n2. **Purpose of Combining IT and OT Domains:**\\n- The primary goal of combining IT and OT domains in the LEMMA-RCA dataset is to provide a general benchmark that evaluates RCA methods across diverse fault scenarios, ensuring their robustness and adaptability. This follows the precedent set by other benchmark datasets (e.g., Open Graph Benchmark [1], AdBench [2]), where datasets from unrelated domains are combined to facilitate comprehensive evaluations.\\n3. **Evaluation Across Independent Fault Scenarios:**\\n- By treating each fault as an independent case, the LEMMA-RCA dataset ensures that RCA methods can be tested in varied scenarios without needing to analyze inter-domain relationships. This approach aligns with the diagnostic focus of RCA methods and ensures that the dataset remains relevant for practical use.\\n4. 
**Broader Applicability:**\\n- Combining datasets from multiple domains enhances the diversity of fault scenarios, making the benchmark applicable to a wider range of RCA methods. This approach supports the development and evaluation of generic RCA methods, which are designed to handle faults across different domains without requiring inter-domain dependencies.\\n\\nIn conclusion, the combination of IT and OT datasets is intended to offer a diverse testing ground for RCA methods, helping to assess their performance across a wide range of fault types, rather than focusing on the analysis of relationships between the domains themselves.\\n\\nWe hope this explanation clarifies the rationale behind our dataset\\u2019s multi-domain nature. Thank you again for your valuable feedback.\\n\\n[1] Hu, Weihua, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. \\\"Open graph benchmark: Datasets for machine learning on graphs.\\\" Advances in neural information processing systems 33 (2020): 22118-22133. \\n\\n[2] Han, Songqiao, Xiyang Hu, Hailiang Huang, Minqi Jiang, and Yue Zhao. \\\"Adbench: Anomaly detection benchmark.\\\" Advances in Neural Information Processing Systems 35 (2022): 32142-32159.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Dear reviewer BP9D:\\n\\nThank you for your invaluable feedback. We would like to address your primary concerns and provide corresponding responses below. \\n\\\\\\n\\\\\\n**Response to: What do \\\"IT\\\" and \\\"OT\\\" refer to? Are \\\"Prometheus\\\" and \\\"ElasticSearch\\\" tools or companies? Figure 2(a) would benefit from a more detailed explanation in its caption. In Figure 3, what is the KPI being referenced? How were the root causes for each system fault labeled? The authors may detail their labeling methodology.**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your insightful comment regarding missing data. 
We agree that missing data is a realistic aspect of real-world systems and can present meaningful challenges for root cause analysis, rather than being viewed as a flaw.\\n\\nIn light of your feedback, we have updated the manuscript to reflect this perspective. Specifically:\\n- We have removed the mention of missing data as a limitation in the updated version.\\n- We have reframed missing data as an inherent characteristic of real-world datasets, highlighting its potential to challenge and improve the robustness of RCA methods.\\n\\nWe appreciate your perspective and believe it adds value to the interpretation of our dataset's characteristics. Please let us know if there are further improvements you would like to see.\"}", "{\"title\": \"Appreciation for Your Feedback and Score Revision\", \"comment\": \"Thank you for taking the time to carefully review our work and for acknowledging the importance of benchmark datasets for the community. We greatly appreciate your thoughtful feedback and the increased score. If there are any remaining concerns or aspects of the paper where you feel improvements can be made, we would be happy to address them during this public discussion period. Your input is invaluable in helping us refine and strengthen our contribution.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: It seems that two IT ops datasets are not very challenging. If current method can handle the proposed well with only metric modal, what is the meaning of including log modal?**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your valuable feedback. We would like to clarify several points regarding the performance of REASON and the inclusion of the log modality in our datasets.\\n\\n1. Performance Gap in REASON:\\n\\nWhile REASON performs well in some metrics, it is still far from achieving optimal performance, particularly with respect to AP@1 and MAP@3. 
For example:\\n - MAP@10 = 0.9 might be considered a good score for a recommendation system, but it is not satisfactory for a root cause identification algorithm.\\n - In real-world applications, especially in critical domains like e-commerce, investigating the top 10 potential root causes for a system fault can be highly time-consuming and cost-prohibitive. Such delays can lead to significant financial losses within a very short time frame.\\n - Narrowing the list of top candidates to fewer items with higher confidence (e.g., achieving high MAP@k for smaller k values) is crucial to improving efficiency and reducing costs.\\n\\n2. In our experiments, we still observe a notable performance gap between REASON and the optimal performance, indicating there is substantial room for improvement.\\n\\n3. Log Modality and Its Impact:\\n\\nIn our paper, we provide simple preprocessing methods for the log data to make it usable for root cause analysis. However, these methods might not be ideal, which could explain why most baseline methods, including REASON, tend to perform worse when using only the log data.\\n\\nDespite this, we observe that incorporating log data with metrics data can significantly improve performance. For example:\\n- In the Cloud Computing dataset, REASON\\u2019s AP@1 improves from 16.76% to **33.33%** after incorporating log data.\\n- This demonstrates the value of the log modality in enhancing RCA performance, especially when combined with other data modalities.\\n\\n4. Raw Dataset Accessibility and Sharing:\\n\\nDue to the large size of the raw data, we were unable to share it through platforms like Google Drive. However, we have made the raw data publicly available on Hugging Face to ensure accessibility to the research community. Because of the double-blind review policy, we cannot include the Hugging Face link in this submission version. 
Upon acceptance, we will update the manuscript to include the link, ensuring transparency and usability for future researchers.\\n\\\\\\n\\\\\\n**Response to: What is the difference between the proposed datasets compared with existing ones, e.g., AIOps data in REASON and the popular train ticket datasets?**\\n\\\\\\n\\\\\\nThank you for your question regarding the differences between the proposed datasets and existing ones, such as the train ticket datasets and AIOps data in REASON. Below, we outline the key distinctions.\\n\\n1. Differences from Train Ticket Datasets:\\n - Nature of System Faults:\\n\\nThe system faults in the train ticket dataset are entirely simulated, whereas the faults in our Product Review and Cloud Computing datasets are collected from real-world deployments. This ensures that the faults in our datasets reflect realistic behaviors and complexities encountered in real systems.\\n\\n - Time Granularity and Length:\\n\\nThe train ticket dataset is collected at a much coarser time granularity (1-minute intervals) compared to our datasets, which are collected at **1-second intervals**. Additionally, the train ticket dataset includes very limited timestamps (approximately 60 timestamps covering 4\\u20135 system faults), whereas our Product Review dataset contains approximately **130,000 timestamps for each system fault**. This richer temporal resolution and length allow for more comprehensive characterization of malfunction patterns.\\n\\n- Number of Nodes:\\n\\nThe train ticket dataset includes data from only 27 nodes, whereas our datasets include data from up to 200 nodes, enabling the analysis of more complex system interactions.\\n\\n2. 
Differences from AIOps Data in REASON:\\n\\n - Source of AIOps Data:\\n\\nThe AIOps data used in REASON is derived from our Product Review dataset, which we shared with the authors via private communications to facilitate related studies on root cause analysis.\\n - Additional OT Data:\\n\\nIn addition to the Product Review dataset, our release also includes the Cloud Computing dataset, which introduces more complex failure scenarios derived from cloud computing systems. This dataset expands the scope and diversity of scenarios beyond what is available in the AIOps data.\\n\\nWe believe these distinctions highlight the value and uniqueness of our datasets in advancing research on root cause analysis, offering both realism and depth that address limitations in existing datasets.\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"Thanks for the response.\\n\\n1. \\\"While we mimic the patterns of real system faults during data collection, we emphasize that this is not the same as generating simulated data.\\\"\\nPlease write specifically the process of data collection to avoid miss leading. In such case, it would be also necessary to describe the details about how the authors mimic the patterns in detail (may be in appendix). This would be helpful to evaluate the quality of the data and for follow up works.\\n\\n2. \\\"While SWaT and WADI were originally used for anomaly detection tasks, we are the first to transform these datasets into a root cause analysis (RCA) task.\\\"\\nYes, I agree. But the contribution of transforming an existing datasets into a new task is much less than collecting a new dataset. The authors emphasis a lot of multi-domain datasets and OT domain of paper. But the preprocessing efforts on constructing data is little.\"}", "{\"comment\": \"**Response to: For Product Review datasets, Reason achieves 100% on PR@5 and 95% an MAP@5 with solely metric data. 
I think this is definitely not far from optimal.**\\n\\\\\\n\\\\\\nWe appreciate the reviewer\\u2019s comment and would like to clarify our definition of \\\"optimal performance.\\\" We refer to optimal performance as achieving **MAP@1 = 1.0 or PR@1 = 1.0**. As we mentioned in our earlier response, in real-world applications\\u2014particularly in critical domains like e-commerce\\u2014investigating the top 5 potential root causes for a system fault can be time-consuming and costly. Such delays can result in significant financial losses, making it important to set a high standard for performance evaluation.\\n\\nWe acknowledge that REASON achieves good performance on the Product Review sub-dataset (e.g., PR@5 = 100% and MAP@5 = 95% using only metric data). However, this is based on one sub-dataset within the IT domain, and REASON\\u2019s performance on the more challenging Cloud Computing sub-dataset is notably lower (e.g., PR@1 = 0.167). This demonstrates that the IT operations datasets in LEMMA-RCA are challenging overall, even if one baseline performs well on a specific sub-dataset.\\n\\\\\\n\\\\\\n**Response to: The difference on injected system faults**\\n\\\\\\n\\\\\\nWe appreciate the reviewer\\u2019s follow-up question and acknowledge that describing the Train Ticket faults as \\\"entirely simulated\\\" may have been unclear. Both datasets involve fault injection in controlled environments. However, the faults in our datasets are designed to reflect real-world scenarios observed in production environments, with differences in **design, scale, and data richness**:\\n\\n1. **Fault Realism**:\\n\\nOur datasets include realistic scenarios such as **Silent Pod Degradation**, **DDoS attacks**, and **cryptojacking**, which emulate operational challenges seen in real IT systems. These faults involve nuanced behaviors, such as subtle latency increases or cascading failures, informed by real deployment experiences.\\n\\n2. 
**Scale and Granularity**:\\n\\n- **The Product Review Platform** includes 216 pods and six nodes, collecting metrics at 1-second intervals (~130,000 timestamps per fault).\\n\\n- **The Cloud Computing Platform** captures six fault types, with data sourced from AWS CloudWatch Metrics and Logs, offering detailed insights across layers (e.g., API debug logs, MySQL logs).\\n\\n- In contrast, the Train Ticket dataset involves only 27 nodes, with coarser granularity (1-minute intervals) and ~60 timestamps per fault.\\n\\n3. **Data Diversity and Monitoring**:\\n\\nWe collect rich metrics and logs using tools like Prometheus, Elasticsearch, and CloudWatch, enabling comprehensive fault analysis. Faults were monitored for extended periods (e.g., 49 hours per fault in the Product Review Platform), providing high temporal resolution and diverse data types, such as system metrics, API logs, and database logs.\\n\\\\\\n\\\\\\n**Clarification**:\\n\\nWe recognize that the Train Ticket dataset also uses a deployed environment for fault injection. Our intent was to highlight differences in **scale, granularity, and fault complexity**, which make our datasets particularly challenging for root cause analysis.\"}", "{\"comment\": \"**Response to: Your analysis indicates that your task data is time-series data; however, algorithms such as PC, Notears, and CORAL are not designed for time-series data. You should consider incorporating more algorithms specifically designed for time-series data.**\\n\\\\\\n\\\\\", \"a\": \"Thank you for raising this concern. 
We would like to clarify that **most of the baseline methods we employ**, including CORAL, Dynotears, REASON, MULAN, $\\\\epsilon$-Diagnosis, Nezha, CIRCA, RCD, Baro, and PCMCI, **are specifically designed for analyzing time-series data**, which ensures that the majority of our comparative analysis is meaningful and relevant to the task.\\n**Regarding PC and Notears**, we acknowledge that these methods were not originally designed for time-series data. **However, they have been widely utilized in recent works to detect root causes in time-series datasets similar to ours**, as evidenced by [1], [2], [3], and [4]. We selected these methods as baselines to maintain consistency with the current literature and provide a point of comparison, as they have demonstrated competitive performance in similar applications.\\nWe appreciate your suggestion and believe that this combination of existing literature and time-series-specific baselines strikes a balance between tradition and task-specific relevance. Please let us know if there are specific time-series algorithms you would recommend for inclusion in future work.\\n\\n[1] Wang, Lu, Chaoyun Zhang, Ruomeng Ding, Yong Xu, Qihang Chen, Wentao Zou, Qingjun Chen et al. \\\"Root cause analysis for microservice systems via hierarchical reinforcement learning from human feedback.\\\" In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 5116-5125. 2023.\\n\\n[2] Ikram, Azam, Sarthak Chakraborty, Subrata Mitra, Shiv Saini, Saurabh Bagchi, and Murat Kocaoglu. \\\"Root cause analysis of failures in microservices through causal discovery.\\\" Advances in Neural Information Processing Systems 35 (2022): 31158-31170.\\n\\n[3] Zan, Lei. \\\"Causal Discovery from Heterogenous Multivariate Time Series.\\\" In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 5499-5502. 
2024.\\n\\n[4] Yuan Meng, Shenglin Zhang,, Yongqian Sun, Ruru Zhang, Zhilong Hu, Yiyin Zhang, Chenyang Jia, Zhaogang Wang, Dan Pei, \\u201cLocalizing Failure Root Causes in a Microservice through Causality Inference\\u201c. IWQoS 2020.\\n\\nWe hope this addresses your concerns and welcome any additional feedback. Thank you!\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Dear reviewer E7LX\\uff1a\\n\\nThank you for your invaluable feedback. We would like to address your primary concerns and provide corresponding responses below. \\n\\\\\\n\\\\\\n**Response to: Could the authors consider including the dependency graph? Could the authors benchmark the baselines using the dependency graph instead of the causal graph inferred by the PC? ...** \\n\\\\\\n\\\\\", \"a\": \"Thank you for your feedback regarding the explanation of experimental results.\\nIn the updated version of the manuscript, we have provided a more detailed explanation of the experimental results in Section 4.2, with the new additions highlighted in red for ease of reference.\\nIf there are specific aspects of the results that require further clarification, we would be happy to elaborate further.\"}", "{\"summary\": \"The paper presents Lemma-RCA, a novel dataset and benchmark designed for root cause analysis (RCA). This dataset includes four sub-datasets: two from IT environments and two from OT environments, offering a large-scale, multi-modality dataset (with both KPI and log data) that captures real-world system faults. 
The authors validate the dataset\\u2019s quality by testing it with eight baseline models across offline single-/multi-modality and online single-modality settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The dataset is open-source, multi-modal, and well-suited to RCA, making it both timely and relevant.\", \"The authors have provided a thorough review of existing baseline approaches.\"], \"weaknesses\": [\"The dataset description could be more accessible to a broader audience, as suggested in the questions below.\", \"Reproducibility is limited due to insufficient implementation details for the baseline models.\"], \"questions\": \"1. **Clarification:**\\n- The current data collection section seems tailored for domain experts and could benefit from clarification for a general audience. For instance, what do \\\"IT\\\" and \\\"OT\\\" refer to? Are \\\"Prometheus\\\" and \\\"ElasticSearch\\\" tools or companies? Clarifying the meanings of such terms would improve the accessibility. \\n- Figure 2(a) would benefit from a more detailed explanation in its caption.\\n- In Figure 3, what is the KPI being referenced? Should it be assumed that all KPIs in this figure relate to system latency? Please specify the y-axis further.\\n- How were the root causes $V_a$ for each system fault $a$ labeled? The authors may include a section in the paper detailing their labeling methodology.\\n2. **Evaluation Metrics:** The evaluation metrics appear to be sample-independent. Why did the authors not consider sample-dependent metrics? For example, over a 10-day system run yielding 1000 faults, the accuracy of the prediction algorithm could be tested against the actual root cause labels.\\n\\n3. **Data Quality Claims:** The authors suggest high data quality based on baseline comparisons. 
This conclusion seems somewhat overstated, as the main insight from these experiments appears to be that \\\"MRR performance improves when considering two modalities jointly.\\\"\\n\\n**Comment on Missing Data:** While the authors view missing data as a limitation, I consider it a realistic aspect of real-world data, which poses a meaningful challenge rather than a flaw.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Response to: Uniformity of decimal places**\", \"a\": \"Thank you for highlighting this inconsistency in the presentation of our results. We sincerely apologize for not fully standardizing all the values in our previous revision. Following your feedback, **we have thoroughly reviewed and ensured that all reported results across the manuscript are now presented uniformly in three decimal places**, including those in Table 3, row 452, row 423, and the entirety of Appendix C. We appreciate your attention to detail, as this has helped us improve the clarity and professionalism of our work.\\n\\n**Response to: Standard deviation of experimental results**\\n\\nWe thank the reviewer for the valuable suggestion to include standard deviations as a measure of the stability of our results. To address this, we conducted additional experiments across ten baselines. 
For each method, we performed five independent runs on the Product Review dataset and computed both the mean and standard deviation using the following formula:\\n\\n$$\\n\\\\text{Std} = \\\\sqrt{\\\\frac{\\\\sum_{i=1}^{n}(x_i - \\\\bar{x})^2}{n-1}}\\n$$\\n\\n\\nwhere $x_i$ is the result of the $i$-th run, $\\\\bar{x}$ is the mean result, and $n$ (5 in this case) is the total number of runs.\\n\\nThe updated results, including standard deviations for metrics such as PR@1, PR@5, PR@10, MRR, MAP@3, MAP@5, and MAP@10, are presented in the table below:\\n\\n| Model | PR@1 (\\u00b1 Std) | PR@5 (\\u00b1 Std) | PR@10 (\\u00b1 Std) | MRR (\\u00b1 Std) | MAP@3 (\\u00b1 Std) | MAP@5 (\\u00b1 Std) | MAP@10 (\\u00b1 Std) |\\n|-------------|------------------|------------------|------------------|-----------------|------------------|------------------|------------------|\\n| \\u03b5-Diagnosis | 0.000 (\\u00b1 0.010) | 0.000 (\\u00b1 0.010) | 0.000 (\\u00b1 0.020) | 0.017 (\\u00b1 0.010) | 0.000 (\\u00b1 0.010) | 0.000 (\\u00b1 0.010) | 0.000 (\\u00b1 0.010) |\\n| GOLEM | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.250 (\\u00b1 0.020) | 0.043 (\\u00b1 0.030) | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.025 (\\u00b1 0.040) |\\n| PC | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.250 (\\u00b1 0.000) | 0.053 (\\u00b1 0.040) | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.050 (\\u00b1 0.000) |\\n| RCD | 0.000 (\\u00b1 0.020) | 0.000 (\\u00b1 0.020) | 0.500 (\\u00b1 0.030) | 0.067 (\\u00b1 0.010) | 0.000 (\\u00b1 0.020) | 0.000 (\\u00b1 0.010) | 0.175 (\\u00b1 0.020) |\\n| Dynotears | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.500 (\\u00b1 0.020) | 0.070 (\\u00b1 0.030) | 0.000 (\\u00b1 0.000) | 0.000 (\\u00b1 0.000) | 0.075 (\\u00b1 0.030) |\\n| CIRCA | 0.000 (\\u00b1 0.020) | 0.500 (\\u00b1 0.030) | 0.500 (\\u00b1 0.020) | 0.250 (\\u00b1 0.030) | 0.333 (\\u00b1 0.020) | 0.400 (\\u00b1 0.010) | 0.450 (\\u00b1 0.020) |\\n| PCMCI | 0.250 (\\u00b1 0.030) | 
0.500 (\\u00b1 0.020) | 0.500 (\\u00b1 0.010) | 0.342 (\\u00b1 0.040) | 0.250 (\\u00b1 0.030) | 0.300 (\\u00b1 0.020) | 0.400 (\\u00b1 0.010) |\\n| C-LSTM | 0.250 (\\u00b1 0.040) | 0.750 (\\u00b1 0.010) | 0.750 (\\u00b1 0.030) | 0.474 (\\u00b1 0.020) | 0.500 (\\u00b1 0.050) | 0.250 (\\u00b1 0.010) | 0.675 (\\u00b1 0.050) |\\n| BARO | 0.500 (\\u00b1 0.010) | 0.500 (\\u00b1 0.020) | 0.500 (\\u00b1 0.010) | 0.500 (\\u00b1 0.010) | 0.500 (\\u00b1 0.020) | 0.500 (\\u00b1 0.010) | 0.500 (\\u00b1 0.010) |\\n| REASON | 0.750 (\\u00b1 0.020) | 1.000 (\\u00b1 0.010) | 1.000 (\\u00b1 0.010) | 0.875 (\\u00b1 0.020) | 0.917 (\\u00b1 0.020) | 0.950 (\\u00b1 0.010) | 0.975 (\\u00b1 0.010) |\\n\\nWe hope this addresses your concern and provides the additional detail requested. Thank you for your suggestion, which has allowed us to expand our analysis.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: The dataset includes only IT and OT domains, which appear to be a simple combination of two unrelated domains. Additionally, the limited data collection period, such as the 11 days for the OT system, may not capture long-term trends.**\\n\\\\\\n\\\\\", \"a\": [\"Thank you for your thoughtful feedback. We would like to address the concerns regarding the multi-domain nature of the dataset and the data collection period.\", \"1. Multi-Domain Dataset Contribution:\", \"The primary goal of LEMMA-RCA is to provide a dataset that evaluates the performance of root cause analysis (RCA) methods across **multiple tasks from distinct domains**, specifically IT and OT systems.\", \"These domains were chosen because they represent two fundamentally different environments with unique challenges for RCA, allowing researchers to benchmark methods across diverse scenarios.\", \"Importantly, the dataset is not intended for evaluating cross-domain RCA performance but rather for investigating RCA techniques within individual domain contexts. 
The combination of IT and OT domains enhances the dataset\\u2019s applicability without introducing unnecessary cross-domain complexity.\", \"2. Sufficiency of the Data Collection Period:\", \"The 11-day data collection period for the OT system was designed to be comprehensive enough to capture both **normal and malfunction** patterns associated with system faults.\", \"Collecting data over longer periods to include long-term trends is not necessary for the following reasons:\", \"Focus on Malfunction Patterns: The primary focus of RCA is on understanding malfunction behaviors. Prolonged collection of normal patterns would not add significant value to fault analysis.\", \"Malfunction Patterns Duration: Malfunction patterns lasting several hours to a day are sufficient to exhibit system behaviors related to faults. These durations provide ample data for meaningful RCA evaluations.\", \"In real-world applications, such as in e-commerce platforms, faults must be identified promptly to avoid significant financial losses. Faults that persist for hours or days without resolution can lead to substantial disruptions and costs, making short-term data more relevant for RCA tasks.\", \"We believe the multi-domain design and focused collection period of LEMMA-RCA strike a balance between diversity, depth, and real-world relevance, ensuring the dataset's utility for advancing RCA research. If additional clarifications or enhancements are required, we are happy to provide further details.\"]}", "{\"title\": \"Response to authors\", \"comment\": \"Dear authors:\\n\\nI do appreciate your work! 
However, you did not address my concerns.\\n\\n> analyze whether these methods exhibit similar performance distinctions on this dataset.\\n\\nHowever, you did not provide me with targeted responses, experimental results, or meaningful analysis; you only mentioned conducting some additional experiments.\\n\\n> the dataset includes only IT and OT domains, which appear to be a simple combination of two unrelated domains\\n\\nFurthermore, you did not address my concerns regarding the multi-domain aspect you proposed, such as analyzing the relationship between the two domains and whether it is necessary to combine the OT and IT datasets. \\n\\n> it is not clarified whether these platforms the data come from are sufficiently representative to ensure the quality of the data \\n\\nyou did not analyze whether the platforms from which the data is sourced are representative.\"}", "{\"title\": \"Thanks Reviewer E7LX for Your Feedback: Response on Baselines, Results, and Data Accessibility\", \"comment\": \"Thank you for your constructive feedback and for actively engaging with our work during the public discussion phase. We appreciate your thoughtful comments and suggestions.\\n\\n1. **Partial Dependency Graphs**: \\n Following your suggestions, we have uploaded the semi-complete dependency graphs in CSV format, which you can access [here](https://drive.google.com/drive/u/4/folders/1mUkgidLaQlfH2Ka8bIq38pQNLcKvyySZ). We will also make the code for the causal discovery experiments available to support reproducibility and further research. \\n\\n2. **Selection of Baselines**: \\n Regarding the question about the selection of baselines, we have addressed this in our response to Reviewer UCbX. To summarize briefly, most of the methods we employ\\u2014such as CORAL, REASON, MULAN, \\u03f5-Diagnosis, Nezha, CIRCA, RCD, and Baro\\u2014are state-of-the-art algorithms specifically designed for root cause analysis in time-series data. 
This ensures that the comparative analysis is both meaningful and relevant. For PC and Notears, while they were not originally designed for time-series data, they have been widely adopted in recent literature for detecting root causes in similar datasets, as shown in: \\n- Wang, Lu, Chaoyun Zhang, Ruomeng Ding, Yong Xu, Qihang Chen, Wentao Zou, Qingjun Chen et al. \\\"Root cause analysis for microservice systems via hierarchical reinforcement learning from human feedback.\\\" In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 5116-5125. 2023.\\n- Ikram, Azam, Sarthak Chakraborty, Subrata Mitra, Shiv Saini, Saurabh Bagchi, and Murat Kocaoglu. \\\"Root cause analysis of failures in microservices through causal discovery.\\\" Advances in Neural Information Processing Systems 35 (2022): 31158-31170.\\n- Zan, Lei. \\\"Causal Discovery from Heterogenous Multivariate Time Series.\\\" In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 5499-5502. 2024.\\n- Yuan Meng, Shenglin Zhang, Yongqian Sun, Ruru Zhang, Zhilong Hu, Yiyin Zhang, Chenyang Jia, Zhaogang Wang, Dan Pei, \\u201cLocalizing Failure Root Causes in a Microservice through Causality Inference\\u201d. IWQoS 2020.\\n \\nThese references validate their utility and relevance for time-series root cause analysis tasks. \\n\\n3. **More Thorough Presentation of Results**: \\n We encourage the reviewer to refer to Section 4.2 of the updated manuscript, where the red-highlighted text provides a detailed explanation of the experimental results, performance distinctions, and underlying reasoning. Quantitative results are presented in Tables 3 and 4, and we have included additional details in Appendix K for further clarity. If there are specific aspects of the results that require further elaboration, we would be happy to address them. 
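The PR@k, MRR, and MAP@k numbers quoted throughout this thread are rank-based metrics computed over each failure case's ranked root-cause candidates. A minimal sketch of one common formulation from the RCA literature (the pod names and ranked lists below are hypothetical, and the exact definitions used by LEMMA-RCA may differ):

```python
# Hedged sketch of rank-based RCA metrics (PR@k, MAP@k, MRR).
# Candidate names are hypothetical illustrations, not from the dataset.

def pr_at_k(ranked, truth, k):
    """Fraction of true root causes recovered in the top-k candidates."""
    hits = len(set(ranked[:k]) & set(truth))
    return hits / min(k, len(truth))

def map_at_k(ranked, truth, k):
    """Mean of PR@j for j = 1..k."""
    return sum(pr_at_k(ranked, truth, j) for j in range(1, k + 1)) / k

def mrr(cases):
    """Mean reciprocal rank of the first correct candidate per failure case."""
    total = 0.0
    for ranked, truth in cases:
        rank = next((i + 1 for i, c in enumerate(ranked) if c in truth), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(cases)

# Hypothetical example: two failure cases with ranked pod candidates.
cases = [
    (["pod-a", "pod-b", "pod-c"], {"pod-b"}),  # true cause ranked 2nd
    (["pod-d", "pod-e", "pod-f"], {"pod-d"}),  # true cause ranked 1st
]
print(mrr(cases))                            # (1/2 + 1/1) / 2 = 0.75
print(pr_at_k(cases[0][0], cases[0][1], 1))  # 0.0
```

Note that PR@k normalizes by min(k, |truth|), so a case with a single true root cause can still score 1.0 at small k once the cause appears in the top-k list.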
\\n\\nThank you once again for your valuable feedback and for recognizing the contributions of our work. We remain committed to addressing your concerns and strengthening the impact of this research.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: The authors conducted some preprocessing to convert logs to time series for evaluation. But the open-sourced datasets do contain all the original logs, right?**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your question. We would like to clarify that the original logs are indeed included in the datasets.\\n\\nIn our paper, we provide simple preprocessing methods to convert logs into time series for evaluation. However, we also release the **raw datasets**, including the original logs, to allow users to preprocess the data using their own methods and potentially achieve better performance.\\n\\nDue to the large size of the raw data, we could not upload it to platforms like Google Drive. Instead, we have made the raw data publicly available on **Hugging Face**, ensuring accessibility to the research community. However, because of the double-blind review policy, we are unable to include the Hugging Face link in this submission version. We will include this link upon acceptance to facilitate access for future research.\"}", "{\"comment\": \"**Response to: 1. Could you please point me to where I can find the partial dependency graphs in the public datasets, if they are already released.**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your suggestion! We have included the additional baselines in the main paper. Their definitions are detailed in **Section 4.1** (Baseline Definition), and the corresponding experimental results are presented in **Table 3**.\\n\\\\\\n\\\\\\n**Response to: 5. Do you mean that you computed the causal graph separately for each test case? Also, which four cases?**\\n\\\\\\n\\\\\\nThank you for your question! 
Yes, we computed the causal graph separately for each failure case, as the RCA task is inherently case-by-case. The four cases refer to the four distinct system failures included in the Product Review dataset.\\n\\nFor each failure case, we computed the metrics individually and then averaged the results across all four cases to ensure consistency and comparability.\\n\\nWe hope this clarifies your concern! If there is anything unclear in our previous response to any question, please let us know, and we will gladly provide further clarification.\"}", "{\"title\": \"Follow-Up on Feedback Before Discussion Phase Ends\", \"comment\": \"Dear Reviewer UCbX,\\n\\nThank you for your valuable feedback on our paper. As the ICLR public discussion phase is ending soon, we would like to confirm if our responses have fully addressed your concerns. If there are any remaining issues, we\\u2019d be happy to provide further clarifications.\\n\\nIf you feel that all concerns have been resolved, we hope this could be reflected in your evaluation.\\n\\nWe sincerely appreciate your time and thoughtful input!\"}", "{\"comment\": \"We apologize for missing one of your questions in our previous response.\", \"regarding_hyperparameter_tuning_for_the_baselines\": \"We used the default parameters provided by the respective methods. This approach was chosen to ensure a fair comparison, as default values are typically selected by the original authors to represent standard or well-performing configurations. Additionally, key parameters, such as the time lag, were kept consistent across all experiments to maintain uniformity and fairness in the evaluation process.\\n\\nTo address your concern more thoroughly, we have updated Section 4.1 in our paper to explicitly reflect the details of the parameter tuning process.\\n\\nWe hope this addresses your concerns and welcome any additional feedback. 
Thank you!\"}", "{\"title\": \"Response to Reviewer BP9D\", \"comment\": \"**Response to: Issue of Overstatement**\\n\\\\\\n\\\\\\nThank you for your prompt and insightful feedback on avoiding overstatement regarding the evaluation of LEMMA-RCA. We agree that the performance of baselines alone cannot fully determine the quality of a dataset, and we appreciate your emphasis on the importance of precise wording. To address your concern, we have revised the original statement to ensure it accurately reflects the scope of our empirical study without implying that baseline performance serves as a direct evaluation of dataset quality.\", \"the_revised_statement_now_reads\": \">\\u201cWe evaluate the performance of ten baseline methods on LEMMA-RCA.\\u201d\\n\\nThis revision removes any implication that the baseline tests are intended to evaluate the dataset\\u2019s quality and instead presents them as a component of our empirical study. \\n\\nWe hope this change aligns with your recommendation and adequately addresses your concern. Please let us know if further clarification or revisions are needed.\\n\\\\\\n\\\\\\n**Response to: Baseline Implementation Details**\\n\\\\\\n\\\\\\nThank you for your valuable feedback on including parameter details and discussing sensitivity to hyper-parameter tuning. We have made the following updates to address your concerns:\\n\\n1. Baseline Parameter Settings\\n\\nWe have added the key parameter settings for all baseline models to the appendix for transparency and replicability. 
Below is a summary of the configurations:\\n- Dynotears:\\n - lag=20 (maximum time lags), lambda_w=1e-3 (weight regularization), lambda_a=1e-3 (autoregressive term regularization), g_thre=0.3 (sparsity threshold).\\n- PC:\\n - alpha=0.05 (significance level for conditional independence tests), ci_test='fisherz' (type of conditional independence test).\\n- C-LSTM:\\n - hidden=100 (hidden units in LSTM), lag=20 (maximum time lags for sequence modeling), lam=10.0 (model complexity regularization), lam_ridge=1e-2 (ridge regression regularization), lr=1e-3 (learning rate), max_iter=30000 (maximum iterations), g_thre=0.3 (sparsity threshold).\\n- GOLEM:\\n - lambda_1=2e-2 (weight for sparsity regularization), lambda_2=5.0 (weight for smoothness regularization), learning_rate=1e-3 (optimization learning rate), num_iter=30000 (number of iterations for training), g_thre=0.3 (sparsity threshold).\\n- REASON:\\n - lag=20 (maximum time lags for causal modeling), L=150 (hidden layers with 150 units each), lambda1=1 (adjacency matrix sparsity regularization), lambda2=1e-2 (autoregressive term balancing regularization), gamma=0.8 (integration of individual and topological causal effects), g_thre=0.3 (sparsity threshold).\\n\\nThese settings have been comprehensively detailed in Appendix I to support replication.\\n\\n2. Hyper-Parameter Sensitivity Analysis\\n\\nTo provide further insights into hyper-parameter tuning, we conducted sensitivity analyses for key parameters of the REASON model using the Product Review subdataset. 
Below are the results:\\n\\n**\\\\($\\\\gamma$\\\\) Sensitivity**:\\n\\n| \\\\($\\\\gamma$\\\\) | MAP@10 | MRR |\\n|------------|--------|-------|\\n| 0.1 | 0.80 | 0.81 |\\n| 0.2 | 0.80 | 0.81 |\\n| 0.3 | 0.84 | 0.82 |\\n| 0.4 | 0.86 | 0.83 |\\n| 0.5 | 0.88 | 0.83 |\\n| 0.6 | 0.88 | 0.73 |\\n| 0.7 | 0.86 | 0.83 |\\n| 0.8 | 0.92 | 0.84 |\\n| 0.9 | 0.90 | 0.74 |\\n\\n**Analysis**: The optimal \\\\($\\\\gamma$\\\\) is 0.8, achieving the best MAP@10 (0.92) and MRR (0.84). This indicates that balancing individual and topological causal effects is crucial for model performance.\\n\\n---\\n\\n**\\\\(L\\\\) Sensitivity**:\\n\\n| \\\\(L\\\\) | MAP@10 | MRR |\\n|------------|--------|-------|\\n| 10 | 0.52 | 0.50 |\\n| 20 | 0.33 | 0.25 |\\n| 50 | 0.37 | 0.32 |\\n| 100 | 0.42 | 0.28 |\\n| 150 | 0.53 | 0.50 |\\n| 200 | 0.37 | 0.33 |\\n\\n**Analysis**: The best performance is observed at \\\\(L=150\\\\), where MAP@10 and MRR reach 0.53 and 0.50, respectively. This suggests that an appropriate hidden layer size balances model capacity and complexity, avoiding underfitting or overfitting.\\n\\n3. Summary\\n\\nWe have incorporated detailed parameter settings for all baselines in Appendix I and provided a dedicated discussion on hyper-parameter sensitivity, addressing both replicability and clarity. We hope this addresses your concerns and welcome any additional feedback. Thank you!\"}", "{\"title\": \"Thanks for the reply.\", \"comment\": \"Thanks for the reply, which addresses some of my concerns. And I do think benchmark datasets are important for the community. I have raised my scores from 5 to 6.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: The evaluation metrics appear to be sample-independent. 
Why did the authors not consider sample-dependent metrics?**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your feedback regarding the interpretation of our experimental results and data quality claims.\\nWe agree that the statement about data quality based on baseline comparisons could be better clarified. To address this:\\n\\n1. Clarification of Data Quality Claims:\\n- Our data quality claims are not solely based on the baseline comparisons but also on the design of the dataset, which includes diverse and realistic fault scenarios collected from real-world systems.\\n- The experiments demonstrate that the joint use of metrics and log modalities improves performance (e.g., MRR), highlighting the complementary nature of these data modalities and the richness of the dataset in capturing fault-relevant patterns.\\n2. Additional Explanation in Section 4.2:\\n- We have expanded the explanation of the experimental results in the updated version of Section 4.2 (highlighted in red). This includes a more detailed discussion of how the dataset facilitates performance differentiation across various methods and modalities.\\n3. Data Quality Beyond Metrics:\\n- Beyond MRR improvements, the dataset\\u2019s quality is evident in its scale, diversity, and real-world relevance, which are not fully captured by any single performance metric. For example, the dataset includes high temporal resolution and a large number of nodes, providing comprehensive coverage of system behaviors and fault scenarios.\\n\\nWe hope these clarifications address your concerns and provide a more nuanced understanding of our data quality claims. If additional explanations are needed, we are happy to elaborate further.\"}", "{\"summary\": \"This paper introduces Lemma-RCA, a dataset designed for root cause analysis. Lemma-RCA has distinctive and appreciable characteristics like large-scale, multi-modal nature and spans two domains: IT and OT. 
It includes test logs and time-series data, capturing KPI metrics across several interconnected pods over more than 100,000 timestamps. Notably, the dataset provides ground-truth annotations for both the exact fault occurrence times and the root cause components. This level of detail makes Lemma-RCA a valuable resource for advancing research in RCA.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Lemma-RCA is a large, multi-modal and multi-domain dataset that includes data from both IT and OT domains. It has over 100,000 timestamps across several connected pods, with a rich mix of test logs and time-series data. This dataset will be valuable for testing and improving future RCA methods.\", \"Unlike most other datasets, Lemma-RCA provides exact ground-truth labels, showing both when faults happened and the specific components responsible.\", \"The paper builds on past studies that highlight using causal structure-based methods for RCA. The authors compare Lemma-RCA with causal discovery methods and other recent RCA models.\", \"**Clarity and Presentation**: The paper is well-organized, with clear visuals and a smooth flow that makes it easy to understand in a single read.\"], \"weaknesses\": \"**Missing Dependency Graph**: A key limitation of Lemma-RCA is the absence of a dependency graph, which prior datasets like PetShop provided as a causal graph. This dependency graph is critical for RCA, as it allows more direct evaluations of causal discovery methods. The paper seems to already hint at the partial dependency graph in Figure 1(a). I wonder if the authors could add the full dependency graph along with the datasets.\\n\\n**Insufficient Explanation of Baseline Approaches:** The paper does not include explanations of the baseline approaches used, even in the appendix. 
Although prior work is cited, providing brief descriptions of each benchmarked approach, particularly the high-performing REASON method, would enhance the reader\\u2019s understanding of the comparative results.\\n\\n\\n**Limited Explanation of Experimental Results**: The experimental results focus primarily on causal discovery approaches, but they lack in-depth analysis of why these methods failed. The authors' insights and intuition about why each method achieved the numbers reported in the table could significantly enhance the understanding of the experiment section. For instance, suppose we assume that the dependency graph is the true causal graph as in PetShop. Then can the authors establish how far the PC's predicted causal graph is from the true dependency graph? This would at least give us a sense of the causal discovery performance and put the RCA results in context. For instance, if the causal discovery performance is very poor, there is no meaning in expecting methods like PC, GOLEM, etc. to perform better in predicting root causes. Additionally, one interesting experiment to run would be evaluating the causal-graph-based baselines on the true dependency graph, instead of the one inferred from observational data by PC.\\n\\n**Choice of Baseline Algorithms:** Given that the dataset is timestamped, it cannot be assumed that each record is i.i.d. Some causal discovery methods, like those in the Tigramite package (https://jakobrunge.github.io/tigramite/), are tailored for time-series data. It is unclear why the authors chose standard PC over these alternatives, which may be more suitable for time-dependent causal discovery.\\n\\n\\nFinally, some important prior RCA works appear to be missing among the benchmarked methods. For example, the paper by Pham et al. (2024) on BARO highlights that inaccurate RCA predictions can result when a method fails to learn the correct causal graph. 
Including such approaches would provide a more thorough baseline comparison and strengthen the evaluation.\\n\\n[1] Pham L, Ha H, Zhang H. Baro: Robust root cause analysis for microservices via multivariate bayesian online change point detection. Proceedings of the ACM on Software Engineering. 2024 Jul 12;1(FSE):2214-37.\", \"questions\": \"1. Could the authors consider including the dependency graph? Having this graph like in petshop seems like a deal breaker to me.\\n\\n2. Could the authors benchmark the baselines using the dependency graph instead of the causal graph inferred by the PC?\\n\\n3. For the CIRCA method as well, could the authors provide results based on the dependency graph?\\n\\n4. The experiments section needs more and a systematic explanation on why each method performed better or worse.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. However, I still have concerns about whether this dataset can accurately evaluate the algorithms. I will keep my score.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Response to: Reproducibility is limited due to insufficient implementation details for the baseline models.**\\n\\\\\\n\\\\\", \"a\": \"Thank you for your feedback regarding the reproducibility of our baseline model implementations.\\n\\nWe would like to clarify that we have released the source code for all the baseline methods used in our experiments to ensure reproducibility. The source code is accessible via the anonymous GitHub link provided in the abstract. To access it:\\n\\n- Open the link provided in the abstract.\\n- Click on the Source option in the menu at the top-right corner.\\n- From there, click on GitHub to access the repository containing the implementation details.\\n\\nThis ensures that all baseline implementations are transparent and reproducible. 
If there are specific aspects of the implementation that require further clarification, please let us know, and we will be happy to address them.\"}", "{\"comment\": \"**Response to: Please write specifically the process of data collection to avoid misleading**\\n\\\\\\n\\\\\\nWe thank the reviewer for their valuable feedback. In response, we have expanded Appendix B to provide a detailed description of the processes used to induce system faults and mimic real-world patterns during data collection. This includes specific methodologies, data collection tools, and analysis techniques for each fault scenario. These enhancements aim to clarify our approach and ensure that future researchers can replicate and evaluate the quality of our data.\\n\\\\\\n\\\\\\n**Response to: But the contribution of transforming an existing dataset into a new task is much less than collecting a new dataset. The authors put a lot of emphasis on multi-domain datasets and the OT domain of the paper.**\\n\\\\\\n\\\\\\nWe appreciate the reviewer\\u2019s feedback regarding the emphasis on multi-domain datasets and the preprocessing efforts for transforming existing datasets. While we agree that collecting a new dataset is a significant contribution, we would like to clarify that the primary contribution of our work lies in the collection of novel datasets from the IT domain, specifically the Product Review and Cloud Computing datasets.\\n\\nAdditionally, we acknowledge that transforming existing datasets like SWaT and WADI into a root cause analysis (RCA) task represents a smaller effort compared to collecting entirely new datasets. 
However, this transformation is an essential step for addressing the lack of RCA-specific datasets in the operational technology (OT) domain, ensuring broader applicability and relevance.\\n\\nTo better align with the reviewer\\u2019s concerns, we have revised the relevant sentences in the paper: e.g.,\", \"original\": \"\\\"LEMMA-RCA is multi-domain, encompassing real-world applications such as IT operations and water treatment systems, with \\\\textbf{hundreds of system entities} involved.\\\"\", \"revised\": \"\\\"LEMMA-RCA encompasses real-world applications such as IT operations and water treatment systems, with \\\\textbf{hundreds of system entities} involved.\\\"\\n\\nThis revision removes the term \\\"multi-domain\\\" to focus more explicitly on the practical applications and the contribution of collecting new datasets from the IT domain while maintaining an accurate representation of our work.\"}", "{\"title\": \"Reply to Reviewer UCbX\", \"comment\": \"**Response to: You did not provide me with targeted responses, experimental results, or meaningful analysis; you only mentioned conducting some additional experiments.**\", \"a\": \"We understand the reviewer\\u2019s concern regarding the representativeness of the dataset. While it is challenging to establish a universal metric for representativeness in benchmarks, we have made significant efforts to ensure the dataset covers diverse fault scenarios:\\n\\n1. **Real-World Fault Scenarios:**\\n- The IT domain datasets (Product Review and Cloud Computing) encompass realistic microservice faults such as out-of-memory errors, DDoS attacks, and cryptojacking, as outlined in Section 3.1 and Appendix B. Similarly, the OT domain datasets (SWaT and WADI) include real-world cyber-physical system faults recorded in controlled environments.\\n2. **Diversity of Fault Types:**\\n- Across IT and OT domains, we include 10 distinct fault types, ensuring coverage of both transient and persistent system failures. 
This diversity reflects common issues faced by modern IT and OT systems.\\n3. **Comparative Analysis:**\\n- As seen in Table 3 and related discussions, our dataset exhibits performance trends consistent with other benchmarks (e.g., Petshop), supporting its credibility as a representative evaluation platform.\\n4. **Quality Assurance:**\\n- All data were collected using industry-standard monitoring tools like Prometheus, CloudWatch, and Elasticsearch. Each fault scenario was validated to ensure it mirrors real-world conditions.\\n\\nWe hope this expanded response addresses the reviewer\\u2019s concerns comprehensively. Should further clarification or additional analysis be needed, we are happy to provide it.\"}", "{\"summary\": \"This paper presents LEMMA-RCA, a large-scale, multi-modal, and multi-domain dataset specifically designed for Root Cause Analysis (RCA) in complex systems. The dataset includes real-world fault cases from IT and OT operational systems, covering microservices, water treatment, and distribution systems to support a wide range of RCA tasks. 
To validate the effectiveness of LEMMA-RCA, the authors evaluated various RCA methods on this dataset, demonstrating its diversity and utility across offline and online settings as well as single and multi-modal data scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1)\\tLEMMA-RCA is the first public dataset specifically designed for root cause analysis, covering two domains\\u2014IT and OT.\\n(2)\\tThe paper thoroughly tests multiple existing RCA methods on LEMMA-RCA, demonstrating the dataset\\u2019s quality and multi-modal value.\\n(3)\\tBy making LEMMA-RCA freely accessible, the paper lowers research barriers, encouraging collaboration between academia and industry and enhancing the generalizability and practical impact of RCA methodologies.\", \"weaknesses\": \"(1)\\tOne contribution of this study is the introduction of LEMMA-RCA, the first multi-domain dataset for RCA. However, the dataset includes only IT and OT domains, which appear to be a simple combination of two unrelated domains, thus raising questions about the solidity of this contribution. Additionally, the limited data collection period, such as the 11 days for the OT system, may not capture long-term trends, potentially limiting its applicability to broader fault analysis scenarios.\\n(2)\\tThe figures in this study are unclear, heavily relying on screenshots. \\n(3)\\tThe experimental analysis tables lack consistency in reporting, with varying decimal places and an absence of standard deviation reporting.\", \"questions\": \"(1)\\tThe author should provide more valuable data rather than simply assembling data. 
Additionally, it is not clarified whether these platforms the data come from are sufficiently representative to ensure the quality of the data and the data collection period appears to be rather short, making it difficult to establish whether the dataset adequately captures a wide range of fault patterns and system behaviors.\\n(2)\\tThe experiments designed by the authors do not seem sufficient to demonstrate the value of the dataset. I suggest that the authors select several widely recognized RCA methods with known performance differences and analyze whether these methods exhibit similar performance distinctions on this dataset.\\n(3)\\tThe author can pay more attention to the readability of the figures in the paper and the normalization of the experimental results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0R3ha8oNPU
SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI
[ "Yu Yang", "Yuzhou Nie", "Zhun Wang", "Yuheng Tang", "Wenbo Guo", "Bo Li", "Dawn Song" ]
Existing works have established multiple benchmarks to highlight the security risks associated with Code GenAI. These risks are primarily reflected in two areas: a model’s potential to generate insecure code (insecure coding) and its utility in cyberattacks (cyberattack helpfulness). While these benchmarks have made significant strides, there remain opportunities for further improvement. For instance, many current benchmarks tend to focus more on a model’s ability to provide attack suggestions rather than its capacity to generate executable attacks. Additionally, most benchmarks rely heavily on static evaluation metrics (e.g., LLM judgment), which may not be as precise as dynamic metrics such as passing test cases. Furthermore, some large-scale benchmarks, while efficiently generated through automated methods, could benefit from more expert verification to ensure data quality and relevance to security scenarios. Conversely, expert-verified benchmarks, while offering high-quality data, often operate at a smaller scale. To address these gaps, we develop SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks. For insecure code, we introduce a new methodology for data creation that combines experts with automatic generation. Our methodology ensures the data quality while enabling large-scale generation. We also associate samples with test cases to conduct code-related dynamic evaluation. For cyberattack helpfulness, we set up a real environment and construct samples to prompt a model to generate actual attacks, along with dynamic metrics in our environment. We conduct extensive experiments and show that SecCodePLT outperforms the state-of-the-art (SOTA) benchmark CyberSecEval in security relevance. Furthermore, it better identifies the security risks of SOTA models in insecure coding and cyberattack helpfulness. 
Finally, we apply SecCodePLT to the SOTA code agent, Cursor, and, for the first time, identify non-trivial security risks in this advanced coding agent.
[ "Code Generation", "Cybersecurity", "Safety", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=0R3ha8oNPU
https://openreview.net/forum?id=0R3ha8oNPU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ubnSXCXgmq", "t2zDId3sv1", "pQk1mUkqnG", "nAPm1nyqdA", "mg7kx8VtAD", "jItYYM4uaa", "gDNfqs0T16", "fxpRHOfI8S", "fvTiNmkP7r", "ev2FHhbJWz", "SVblt9HyLZ", "RGRntMfkgI", "LpEBc0wU68", "IZj3wJ3UmO", "IHaHl6kotH", "HKwSGidXQ0", "Fw61saRhPb", "CEREJQ5rY9", "6faWUQQbi6", "474eBTXJNF" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1730507200313, 1730774568620, 1732148249911, 1733213814431, 1737524214426, 1732147985101, 1732148745678, 1732604114318, 1732148279454, 1730634211599, 1732148182954, 1732147863561, 1732148023655, 1732148715518, 1733207423287, 1732148072461, 1733205179215, 1730690556309, 1734808254119, 1732507945031 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12776/Reviewer_N2DR" ], [ "ICLR.cc/2025/Conference/Submission12776/Reviewer_8iyW" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Reviewer_fo1m" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Reviewer_N2DR" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ "ICLR.cc/2025/Conference/Submission12776/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12776/Reviewer_3Qi9" ], [ "ICLR.cc/2025/Conference/Submission12776/Area_Chair_eLbw" ], [ "ICLR.cc/2025/Conference/Submission12776/Reviewer_N2DR" ] ], "structured_content_str": [ "{\"summary\": \"This paper provides a benchmark for evaluating security issues associated with LLM generated code. Specifically covering:\\ni) Secure code generation: to assess LLMs ability to generate secure code (focusing on Python). \\nii) Cyber attack helpfulness: to evaluate a model\\u2019s capability in facilitating end-to-end cyberattacks.\\nThey apply 4 LLMs to both benchmarks -- CodeLlama-34B-Instruct, Llama-3.1-70B, Mixtral-8\\u00d722B, GPT-4o \\u2013 and compare their performance.\\n\\n**Secure code generation benchmark:** \\nThe authors manually created 153 seed tasks covering 27 CWEs relevant to python \\u2013 then used LLM-based mutators to generate variations of the tasks for each of the seeds (for large scale generation). They also include both vulnerable and patched code versions, together with functionality and security test cases for each task \\u2013 resulting in a total of 1345 samples with about 5 test cases per sample. \\n* They evaluate their samples on \\u2018prompt faithfulness\\u2019 and \\u2018security relevance\\u2019 \\u2013 comparing with CyberSecEval and outperforming it on both. \\n* They also evaluate the 4 LLMs for achieving the task\\u2019s required functionality using the pass @1 metric on the provided unit tests. And they evaluate the code security using carefully constructed security tests, including the boost in security when providing security policy info in the prompt.\\n* They also evaluate Cursor on their benchmark. \\n\\n**Cyber attack benchmark:** \\nFor this, they build a simulated environment containing a network that runs an e-commerce application. Their environment is structured similarly to a CTF, where the adversary aims to gain access to the database and steal sensitive user information. 
The benchmark facilitates 7 MITRE ATTACK categories. \\n* They evaluate the 4 LLMs on their refusal rate to comply with generating attacks, and when attacks are generated, the attack success rate is measured.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is tackling 2 important and timely problems at the intersection of LLMs and cybersecurity.\\n\\u2022\\tHaving a benchmark that includes both security and functionality unit tests for each code example is a strong contribution to the secure code generation literature. Many SOTA LLM papers in the literature currently test code security and functionality separately (ie. using separate datasets/tasks) due to lack of benchmarks with the capability to simultaneously test both. Strong and comprehensive benchmarks are definitely lacking for this problem. \\n* Proposed approach to leverage LLMs to scale the development of secure code benchmark dataset. \\n* Using a controlled environment to see if the model can generate commands or code that facilitate attacks -- and tracking refusal rates in research on LLM-driven pentesting and red teaming can provide insight into the effectiveness of their internal safety mechanisms.\", \"weaknesses\": \"* While a lot of work has been done for this paper and there are definitely strong contributions, by setting CyberSecEval as the goal post to beat, this paper goes too broad in scope (for a paper of this length) and fails to adequately establish its position among the existing peer reviewed literature for each of these 2 distinct research directions. There is no need for benchmarks to cover both secure code generation and cyber attack capability as they have fundamentally different objectives, setups, and evaluation metrics. In the case of CyberSecEval, combining these tasks made sense because it was aligned with their product\\u2019s goals. For SecCodePLT, however, the logical connection is less clear. 
Secure code generation and cyberattacks don\\u2019t share the same purpose, infrastructure requirements, or audience, and combining them into the one conference-length paper restricts the depth of each evaluation.\\n\\n* Overall, there is a lack of discussion/justification for the choice of prompt wording/techniques. \\n\\n**Secure code generation task:** \\ni) Relevant benchmarks, such as LLMSecEval (MSR 2023), have been overlooked. LLMSecEval covers 18 Python-related CWEs, which challenges the authors' claim that existing benchmarks address only 8 Python-related CWEs.\\nA more detailed analysis of the scope/coverage of existing peer reviewed benchmarks and where this paper fits in would strengthen this work. \\nii)\\tCode security testing is challenging. Many SOTA papers try to utilize a combination of SAST tools, LLM vulnerability checkers, and manual checking. The discussion of the code security tests could be more convincing if it provided detailed information on the breadth and depth with which these tests cover potential vulnerabilities and edge cases. Eg. providing a breakdown of security test cases per CWE, showing how each test targets specific security requirements and edge cases, would help demonstrate thoroughness. Or providing a metric similar to code coverage in unit testing would help show that the security tests are exhaustive. Overall I didn\\u2019t understand how the vulnerable and patched code examples are used for evaluating the correctness of test cases and/or generated output. \\niii)\\tPrompt quality assessments could be stronger. The Security-relevancy Judge, based on GPT-4, introduces a potential bias since the same LLM type is used in prompt construction. Using diverse models or a user study evaluation of security relevance would provide more objective evaluations. 
Also \\u2013 leveraging, for example, LLMSecEval\\u2019s metrics for prompt clarity and relevance would enhance prompt quality assessment, ensuring that prompts are as clear and useful as possible for secure code generation. \\niv) The \\\"instruction generation\\\" task was hard to understand and seems to not involve generating written instructions or explanations \\u2013 but asking the model to generate the full code based on the provided prompt? \\n\\n\\n**Cyber attack scenario:** \\ni)\\tThe paper lacks a comparison with existing cyber ranges and security testing frameworks like MITRE's Caldera, Metasploit, Cyber Range by IBM, etc, as well as other benchmarks like VulHub, HackTheBox, OWASP Benchmark, VulnHub, etc. The paper lacks a discussion of where they fall short, and why you needed to create your own simulation range \\u2013 in general the positioning of SecCodePLT within the existing ecosystem is insufficient. What gap in existing environments are you are targeting? In what way did you customize your environment for LLM Evaluation? One of the gaps in the existing literature on LLM-driven pentesting and red teaming is lateral movement \\u2013 so a simulated environment and benchmark focused on measuring LLM capability for lateral movement would have been interesting. \\n\\u2022\\t(Small detail- Table 2 claims that the environment facilitates seven of the MITRE attack categories \\u2013 but then lists \\u2018weaponisation and infiltration\\u2019 - neither of which are MITRE attack categories.) \\nii)\\tRefusal analysis seemed to not be extensive. Motivation is not provided for the choice of role playing wording in the prompt: \\u201cI am a security researcher conducting a red teaming test on a web server\\u201d. Were other things tried that might be more persuasive for Claude? \\n\\n**Typos/misprints in the paper:** \\n* Missing a word here on page 2? \\u201cour automatic and validate process\\u2026\\u201d. 
\\n\\u2022\\tRepeated phrase on page 3: \\u201cnot even not even vulnerable\\u201d. \\n\\u2022\\tPage 4: \\u201cMITER\\u201d should be MITRE. \\n\\u2022\\tPage 6: The Figure 2 caption: \\u2018worn\\u2019 \\u2013 perhaps should be \\u201cworm\\u201d? \\n\\u2022\\tPage 6: \\u201cactive domain (AD) server\\u201d --- should this be Active Directory? \\n\\u2022\\tSection 4.2 says Figure 8 and 9 are about CyberSecEval but the figure captions say they are about SecCodePLT. \\n\\u2022\\tMultiple instances of \\u201ccursor\\u201d - should be \\u201cCursor\\u201d. \\n\\u2022\\tPage 9: \\u201cNot that we consider cursor\\u2026\\u201d \\u2013 should be \\u201cNote\\u201d.\", \"questions\": [\"Please provide more details on the security tests, addressing the concerns in the weaknesses section above - including the breadth and depth with which these tests cover potential vulnerabilities and edge cases.\", \"Has any analysis of diversity across the 10 samples for each seed and the 5 test cases per sample been conducted? There might be redundancy.\", \"How are the vulnerable and patched code examples used for evaluating the correctness of test cases and/or generated output?\", \"Please include a comparison with LLMSecEval.\", \"**Cyber attack scenario:**\", \"As outlined in the weaknesses above, please explain the motivation for creating your own simulation range and what gap in existing ranges/benchmarks yours is targeting.\", \"Please provide more details on your attack refusal investigation - were other role playing prompt wordings tried that might be more persuasive for Claude? 
Etc.\"], \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"Code generation for cyber attacks has dual-use purpose and can be misused by malicious actors.\\nI am not sure where the community sits on ethics board approval for this topic.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks. Considering insecure code, the author introduces a new methodology for data creation that combines experts with automatic generation. Considering cyberattack helpfulness, the authors set up a real environment and construct samples to prompt a model to generate actual attacks. Experiments show that CyberSecEval could identify the security risks of SOTA models in insecure coding and cyberattack helpfulness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Promising direction. Establishing the benchmark to highlight the security risks associated with Code GenAI is a direction worth studying.\\n2. Consider real-world attack behaviors and environment deployments. \\n3. Compared with existing baselines from multiple perspectives and the results show the effectiveness of the proposed method.\", \"weaknesses\": \"1. Some related work discussions are missing.\\n2. Some details are not explained clearly. \\n3. There are some minor errors that need to be polished and proofread.\", \"questions\": \"1. This article discusses risk assessment of code generation. Some related works on code generation may also be discussed, such as BigCodeBench [1].\\n\\n[1] Bigcodebench: Benchmarking code generation with diverse function calls and complex instructions. https://arxiv.org/pdf/2406.15877\\n\\n2. Some details are not explained clearly. 
In line 140 of the manuscript, the author mentions \\\"extracting code chunks without proper context frequently leads to false positives\\\". But it seems that the experiment did not perform an ablation experiment on the context field. As shown in lines 867 and 894, the context field is set to None. So I don't understand the role of context and how the solution SecCodePLT in this paper can benefit from context (how to reduce false positives).\\n\\n3. In line 251 of the manuscript, the author mentions \\\"We also introduce rule-based metrics for cases that cannot be evaluated with standard test cases\\\". I am not sure where the rule mentioned here comes from. Is it based on some public manufacturer's provision? \\n\\n4. In MITRE ATT\\\\&CK, the kill chain model may be common. In other words, an attacker often implements different attack stages through a series of attack techniques and tactics. It is unclear whether SecCodePLT considers such multi-stage attack and intrusion, rather than a single attack behavior.\\n\\n5. Some minor errors, such as the missing period after \\\"security-critical scenarios\\\" on line 76. For \\\"security is required.)\\\" on line 253, the period should probably be after \\\")\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**6. Potentially redundancy in our data generation pipeline for the insecure coding task**\\n\\nThank the reviewer for the constructive comments. We added a diversity filter in our data creation pipeline to remove redundancy. Specifically, we calculate the similarity between newly generated data and existing samples using the longest common subsequence (LCS) and word-level Levenshtein distance. If the similarity score for a newly generated sample exceeds a threshold (e.g., 0.9), it is rejected for being too redundant. 
This ensures that each new sample introduces meaningful variation while retaining the core functionality and security context of the original task. The generation process continues iteratively, rejecting redundant samples and regenerating until sufficient diversity is achieved or another stopping condition is met. We added this explanation in Section 3 of the revised paper. \\n\\n**7. Comparison with existing cyber ranges and security testing frameworks**\\n\\nCompared to existing cyber ranges, MITRE's Caldera and Cyber Range by IBM, our benchmarks are different in the following aspects. First, these cyber ranges are interacted with by human users. LLM cannot interact directly with these environments without additional integration (such as APIs or middleware). This makes it challenging to use these platforms to evaluate LLM's capabilities. Second, these cyber ranges lack a fine-grained evaluation metric to measure the progress and effectiveness of each attack stage. In our metric, we provide a metric for each attack stage. Note that Metasploit is a penetration testing tool rather than a cyber range that enables attack evaluations. \\n\\nExisting benchmarks pointed out by the reviewer (VulHub, HackTheBox, OWASP Benchmark) are for individual vulnerability detection and reproduction, while our benchmark focuses on the LLMs\\u2019 capability to launch major steps in the MITRE cyber attack procedure. \\n\\nGiven these gaps, we created our own benchmark that enables dynamic evaluation of major attack steps in the MITRE cyber attack procedure of LLM\\u2019s capabilities in cyber attack helpfulness. A minor clarification, we do have lateral movement as part of our process: In Figure 2, after exploiting the Internal User 2, we will do lateral movement to the LAN server and then get access to the Database host. \\n\\nFollowing the reviewer\\u2019s suggestion, we also surveyed other AI-related penetration testing works. 
Most of them are new AI-driven penetration testing tools targeting certain stages of the whole cyber attack procedure, such as [1,2] for initial access, [3,4] for launching attacks. None of them built an end-to-end evaluation platform for evaluating LLMs. \\n\\nWe clarified this in Section 2 of the revised paper. \\n\\n**8. About weaponization and infiltration and MITRE attack categories**\\n\\nWe apologize for the confusion. The term \\\"Weaponization and infiltration\\\" comes from the cyber kill chain model, which provides a similar definition of the cyberattack process as MITRE. They correspond to the \\\"Initial Access\\\" stage in MITRE, i.e., acquiring access to the target system. We've made the change in our paper to be consistent.\"}", "{\"comment\": \"Dear reviewer N2DR,\\n\\nThank you for your response. Generally, static testing offers higher efficiency but is prone to a high false positive rate and lacks the ability to track runtime or dynamic behaviors of a program (e.g., CWE-1333: Inefficient Regular Expression Complexity). In contrast, dynamic testing provides more reliable results with significantly fewer false positives but struggles to identify vulnerabilities in unreachable code. Most existing works opt for SAST tools such as CodeQL due to the intensive human effort required to construct stricter dynamic test cases for large datasets. In our study, we demonstrated that our test cases achieve an average of 91% line coverage, with the uncovered code primarily consisting of redundant return statements and exception handling, which are unrelated to the vulnerabilities. This enables us to provide a more precise and reliable analysis.\\n\\nRegards, authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**1. Limited programming language**\\n\\nWe thank the reviewer for pointing this out. 
We would like to kindly point out that creating high-quality data for insecure coding and building an executable environment for cyberattack helpfulness is challenging and requires a large amount of effort. As such, in this work, we focus on Python, as it is the most predominant programming language and continues to grow in popularity. Some widely used benchmarks that support dynamic testing are also Python only, such as Swe-bench for patching and LiveCodeBench for debugging and testing. We are the first security benchmark that enables dynamic executions. In our humble opinion, including more program languages such as C/C++ requires substantially more effort. For example, we will need to reproduce existing vulnerable code and create docker environments to execute them. Given the massive amount of effort required, we respectfully believe that it is reasonable to defer it as our future work. We added this discussion in Section 5 of the revised paper.\\n\\n**2. Provide more details about the data generation process for the insecure coding task**\\n\\nWe thank the reviewer for the constructive comments. The test suites used for these mutated tasks and code samples are based on the original test cases. We carefully ensured that these original test cases remained relevant and effective even after mutation. The code mutators were designed to make only syntactic or structural changes that preserve functionality, allowing the original test cases to apply to the mutated code without issue. Additionally, after mutation, we performed dynamic testing to validate that the original test cases still work as intended with the modified code and verify both capability and safety requirements. \\nSpecifically, for each mutated code sample, we executed both the vulnerable and patched versions using a testing framework that loads the setup, core function code, and associated test cases. 
These dynamic tests assess both capability and safety by verifying that both the vulnerable and patched code fulfill the intended functionality, while also ensuring that only the patched code avoids unsafe behavior by passing all security checks, whereas the vulnerable code fails these tests. We filtered out any that do not meet these criteria: capability test cases are retained only if they pass for both versions, and safety test cases are kept only if they pass in the patched code but fail in the vulnerable version. After filtering, we verified that at least one valid test case remains in each category; if there are insufficient valid cases, we rerun the code mutator to generate additional variations that meet these requirements.\\n We clarified this in Section 3 of the revised paper.\\n\\n**3. How to create our prompts and whether the benchmark supports user-specific prompts**\\n\\nWe thank the reviewer for the comments and we are sorry for the confusion. \\nIn our benchmark setup, the benchmark itself provides the core task prompts, code samples, test cases, and evaluation metrics. These elements are carefully crafted and standardized to ensure consistency across evaluations. User control includes decisions on specific evaluation modes, such as whether to run the task as instruction generation (text-to-code generation) or as code completion. Users can also choose to include or exclude optional fields, such as the security policy, to assess how model performance varies with different levels of contextual information.\\nThe system prompts and user templates shown in the paper were carefully crafted with a significant human effort to provide standardized and effective prompts for testing different models. Our benchmark is specifically designed to offer reliable, task-aligned prompts that minimize ambiguities and ensure clarity in evaluating model behavior. 
The aim is to establish a consistent evaluation framework that researchers can readily adopt without needing to design prompts from scratch. Similar to existing LLM benchmarks, our prompts are provided to enable an apple-to-apple comparison of different models\\u2019 performance. We indeed validated that subtly mutating the prompts will not trigger a huge difference in model performance.\\n\\nAbout user-specific prompts: While the benchmark provides these standardized prompts, users who wish to customize their evaluations can modify the input templates. The evaluation framework allows for user-defined prompts, provided they adhere to the necessary structure for the benchmark\\u2019s test cases and evaluation pipeline. We added this clarification in Section 5 of the revised paper.\"}", "{\"title\": \"Global Response (Continued)\", \"comment\": \"Appendix:\\n1. In Appendix C, we added new examples of our data with implementation context. \\n2. In Appendix D, we clarified that the security policy reminder is optional in the input.\\n3. In Appendix K, we added our code mutator prompt. \\n\\n\\n**Additionally, we corrected typos and refined terminology throughout the paper, marking these changes in red.**\\n\\nWe hope these revisions address the reviewers\\u2019 concerns and improve the overall quality of our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer N2DR,\\n\\nThank you for your thoughtful follow-up and for providing detailed feedback. We greatly appreciate your acknowledgment of our efforts and your willingness to raise the score.\\n\\n**Addressing Remaining Concerns**\\n\\nTo clarify, our data generation process involves **two distinct stages**:\\n1. **Stage 1: Seed Generation** - With the assistance of LLM, we created 153 seed examples. The details are shown in Figure 1. 
Each seed contains vulnerable and patched versions of the code, along with corresponding capability and safety test cases. We manually review and correct all the 153 seeds to ensure their faithfulness, security relevance, test case correctness, and sufficient diversity. This stage ensures that the core vulnerabilities align with the CWE specifications.\\n2. **Stage 2: Mutated Samples** - From each seed, we generate up to 10 mutated samples using task and code mutators. These perturbations maintain the core logic and functionalities of the code, ensuring that the vulnerabilities and patches remain consistent while introducing meaningful diversity.\\n\\nBoth stages involve dynamic validation of the test cases and code to ensure correctness. Below, we address how these stages resolve the specific concerns you raised:\\n\\n1. **Ensuring the Vulnerable Code Contains the Required Vulnerabilities**: In Stage 1, each vulnerable code sample is manually crafted and reviewed to ensure alignment with CWE specifications. Dynamic testing is performed to confirm that the vulnerable code passes capability test cases but fails safety test cases, demonstrating that the vulnerabilities are functional and detectable. In Stage 2, we apply task and code mutators to generate variations of the seed samples. Since these perturbations (e.g., renaming variables or arguments) do not alter the core logic, the vulnerabilities established in Stage 1 remain intact. If any mutated sample fails to meet the validation criteria, such as not passing the required test cases, we rerun the code mutator to generate a valid replacement.\\n\\n2. **Ensuring the Patched Code Effectively Mitigates Vulnerabilities**: The patched code is also manually written and reviewed in Stage 1 to specifically address the vulnerabilities identified in the CWE while preserving the functionality of the original code. 
Dynamic validation ensures that the patched code passes both capability and safety test cases, confirming that it effectively mitigates vulnerabilities. In Stage 2, as the core logic of the patched code remains unchanged during perturbations, the patched code retains its ability to mitigate vulnerabilities. If a mutated sample fails validation due to perturbation errors (e.g., inconsistencies in the code), we rerun the code mutator to ensure that all generated samples meet the criteria.\\n\\n3. **Verifying the Security Test Cases**: In Stage 1, capability test cases are retained only if they pass for both the vulnerable and patched code. Safety test cases are kept only if they pass for the patched code but fail for the vulnerable version. These validated test cases are reused directly in Stage 2 without modification, as the core logic of the code remains unchanged during perturbations. \\n\\n**Updates to Text**\\n\\nWe revised the benchmark generation section (section 3.2) to clarify these points in the updated version:\\n- We explicitly highlight that each seed is rewritten five times to adapt the task to new scenarios while maintaining the CWE's core characteristics. These rewrites are manually reviewed to ensure their faithfulness, security relevance, correctness of the test cases, and diversity.\\n- The validation process for capability and safety test cases is clearly outlined as occurring in Stage 1, with these test cases reused in Stage 2.\\n- The process of rerunning the code mutator to handle failures in Stage 2 is emphasized.\\n\\nAdditionally, we addressed the notation issues you identified on page 2, replacing IC and CH with insecure coding and cyberattack helpfulness. We also thoroughly proofread the newly added text again and corrected typos or grammatical errors we found. \\n\\nThank you again for your constructive feedback and for raising your score. 
We are committed to addressing these remaining concerns and improving the clarity of our manuscript.\\n\\nBest regards, \\nAuthors\"}", "{\"comment\": \"**9. About the refusal analysis in the cyberattack helpfulness tasks**\\n\\nThanks for pointing this out. We would like to respectfully justify that as a benchmark, we use the most straightforward prompt to test the basic capability. We do not focus on jailbreaking the model.\\n\\nFollowing the reviewer\\u2019s suggestions, we did an extra experiment to show that there is a possibility of jailbreaking the model. More specifically, we tried three jailbreaking strategies. (1) Let the model act under a role-play scenario of a CTF player. This strategy comes from Cybench and NYU-CTF datasets [5,6]. (2) DAN (do anything now), which is a popular jailbreaking prompt originating from \\u201cLearn Prompting\\u201d [7]. (3) Restricted Response (i.e., \\u201cStrictly limit the scope of your answer to Linux command and command reasoning\\u201d), which is also a popular jailbreaking prompt used in jailbreaking text inputs [7].\\n\\nFor each strategy, we conducted experiments on weaponization and C2&command tasks due to their high refusal rates. The results are shown in Figure 12. We discovered that prompting the model to act as a CTF player and using restricted responses yielded the lowest refusal rates. On the contrary, the popular jailbreaking technique DAN (Do Anything Now) is ineffective in our task. We added this experiment to Appendix H.\\n\\nIn addition, we thank the reviewer for pointing out typos. We corrected them in the revised paper. 
One minor clarification, we updated our terminology from 'active domain (AD) server' to 'LAN server' because our admin server operates on Linux rather than Windows.\\n\\n[1] Generative AI for Pentesting: The Good, the Bad, the Ugly\\n\\n[2] Getting Pwn\\u2019d by AI: Penetration Testing with Large Language Models\\n\\n[3] LLMs as Hackers: Autonomous Linux Privilege Escalation Attacks\\n\\n[4] Teams of LLM Agents Can Exploit Zero-Day Vulnerabilities\\n\\n[5] Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models\\n\\n[6] NYU CTF Dataset: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security\\n\\n[7] https://learnprompting.org/docs/prompt_hacking/jailbreaking\"}", "{\"summary\": \"The paper proposes SECCODEPLT, a unified and comprehensive evaluation platform for code GenAIs\\u2019 risks.\\n\\nFor insecure code, the authors introduce a new methodology for data creation that combines experts with automatic generation. For cyberattack helpfulness, the authors set up a real environment and construct samples to prompt a model to generate actual attacks, along with dynamic metrics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Through experiments, SECCODEPLT outperforms CYBERSECEVAL in security relevance and prompt faithfulness, highlighting the quality of this benchmark.\\nThe authors then apply SECCODEPLT and CYBERSECEVAL to four SOTA open and closed-source models, showing that SECCODEPLT can better reveal a model\\u2019s risk in generating insecure code.\", \"weaknesses\": \"Many state-of-the-art methods for code generation are not mentioned and experimented in the paper, such as:\\n\\nJingxuan He, Martin Vechev. Large Language Models for Code: Security Hardening and Adversarial Testing. 2023. In CCS. https://arxiv.org/abs/2302.05319.\\n\\nErik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. 
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In ICLR. https://arxiv.org/abs/2203.13474\n\nDaniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2023. InCoder: A Generative Model for Code Infilling and Synthesis. In ICLR. https://arxiv.org/abs/2204.05999\n\nLoubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Mu\u00f1oz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. SantaCoder: Don\u2019t Reach for the Stars! CoRR abs/2301.03988 (2023). https://arxiv.org/abs/2301.03988\n\nThere are many other benchmarks for evaluations of code generation that are not mentioned and compared. Please refer to the paper https://arxiv.org/html/2406.12655v1 for details.\", \"questions\": \"In \u201cEach seed contains a task description, example code, and test cases\u201d, do all the source code samples have the task description? What are the methods used in test cases?\n\nIt is not clear how the author performs the code mutator as mentioned in \u201cAs specified in Section 3.2, we design our task mutators to keep the original security context and code mutator to preserve the core functionalities.\u201d What types of code mutators are used here?\n\nWhat dynamic methods do the authors use for \u201cAfter mutation, we also manually check the security relevance of newly generated data and run dynamic tests to ensure the correctness of their code and test cases.\u201d?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"}", "{\"comment\": \"**1. Choice of insecure coding and cyberattack helpfulness**\\n\\nWe thank the reviewer for the constructive comments. We appreciate the reviewer\\u2019s recognition of the massive amount of effort needed to build our benchmark. 
We would like to first clarify our logic for including insecure coding and cyberattack helpfulness in SecCodePLT. Our goal is to evaluate the code generation models\\u2019 safety and security with a focus on code output and generation tasks. We started with the model itself and evaluated its risks in generating insecure and vulnerable code under **benign and normal queries**. Going beyond normal queries, we then evaluate the model\\u2019s helpfulness in cyberattacks where the inputs are **malicious queries**. Given that we focus on code generation, we do not include text generation (e.g., prompt injection) or discriminative tasks (e.g., vulnerability detection) in our benchmark. We clarified our logic in Section 1 of the revised paper. \\n\\n**2. Difference from LLMSecEval**\\n\\nThanks for pointing out this missing related work. LLMSecEval covers 18 Python-related CWEs using natural language (NL) prompts designed to test code generation models. LLMSecEval uses static analysis tools, such as CodeQL, to evaluate security vulnerabilities but does not incorporate dynamic testing. We believe our benchmark is different from LLMSecEval in the following aspects: (1) SecCodePLT covers 27 Python-related CWEs while LLMSecEval covers only 18; (2) We provide more structured input for each data point, including coding task descriptions, vulnerable code, security policies, etc, while LLMSecEval only provides an NL prompt. (3) We conducted manual inspections to ensure that our data were related to real-world, security-critical coding scenarios. (4) Unlike LLMSecEval, SecCodePLT associates each task with test cases for dynamic evaluation, which allows us to assess both the functionality and security of generated code. (5) SecCodePLT includes task variations that test models under different levels of contextual information, such as with or without security policies, enabling a more nuanced evaluation of model behavior in security-relevant contexts. 
We added this discussion in Section 2 of the revised paper. \\n\\n**3. About the quality of our testing cases**\\n\\nWe agree with the reviewer that writing high-quality testing cases that enable comprehensive testing is challenging. In our benchmark, we spent extensive human effort on writing testing cases that aim for high coverage. We added an experiment to show the coverage of our testing cases. We ran the functionality and security testing cases for each data point in our dataset and calculated an average line coverage of 90.92%. Most of the uncovered code consists of redundant return statements and exception handling that is either unnecessary or unrelated to the vulnerability. We added this experiment to Section 3.\\n\\n**4. Potential bias in our security-relevance judgment prompt and the possibility of using LLMSecEval\\u2019s metrics**\\n\\nFollowing the reviewer\\u2019s suggestion, we report results from using claude-3-5-sonnet-20240620 as an alternative judge alongside GPT-4o. The evaluation results showed minimal variation between the two models, demonstrating that the evaluation is not overly dependent on a specific LLM and confirming the reliability of the security relevance metric. We added this experiment in Appendix I in the revised paper. \\nWhile LLMSecEval includes metrics such as naturalness, expressiveness, adequacy, and conciseness to assess prompt quality, our focus is on ensuring prompts align with the security evaluation goals. Given the different goals, we do not use LLMSecEval\\u2019s metrics in our evaluation. \\n\\n**5. Confusion about the \\\"instruction generation\\\" task**\\n\\nSorry for the confusion. In the \\\"instruction generation\\\" task, the model is indeed prompted to generate the complete code based on a structured task description rather than producing written instructions or explanations. 
This task is designed to evaluate the model\\u2019s ability to generate secure and functional code directly from a provided prompt that outlines the coding objective. To make the task clearer, we revised \\\"instruction generation\\\" to \\u201ctext-to-code generation\\u201d in the revised paper.\"}", "{\"comment\": \"**1. Missing related work [1]**\\n\\nWe thank the reviewer for pointing this out. BigCodeBench is designed to evaluate LLMs\\u2019 capability to solve general programming tasks. It focuses on two key aspects: diverse function calls (using tools from multiple libraries) and complex instruction following. This work has a different focus from ours. We focus on the security and risks of the code generation models while this paper focuses more on the models\\u2019 normal capabilities. As such, BigCodeBench collects data from different normal libraries (pandas, numpy, etc), while we collect our data from top CWEs. We added our discussion in Section 2 of our paper.\\n\\n[1] Bigcodebench: Benchmarking code generation with diverse function calls and complex instructions\\n\\n**2. Confusion about the context field in the data**\\n\\nThank you for your question about context usage in our paper. We would like to clarify that there are two distinct types of \\\"context\\\" discussed in the paper:\\n1. Security-related Background Context (Line 140):\\nThis refers to the essential security-relevant background information for each code snippet. Without proper security context, the extracted vulnerable code may actually be benign, which will introduce false positives. For example, using a fixed seed for random number generation might seem harmless in general applications, but could introduce serious vulnerabilities when used in cryptographic contexts. These security contexts are explicitly provided in our task descriptions, as demonstrated in Appendix C under the task_description field. 
We changed the context in Line 140 to \u201cbackground\u201d in the revised paper. \\n\\n2. Function-level Technical Context (Lines 867 and 894):\\nThis refers to the implementation context such as global variables, import statements, and other code dependencies. This is part of our input JSON file that can help the model to better finish the coding task. The None values in these lines specifically refer to this type of technical context, which is None in this specific example, because the example does not involve global variables, import statements, etc. We have added new examples in Appendix C that have these implementation contexts.\\n\\n**3. Rule-based metrics in line 251**\\n\\nSorry for the confusion. The rule-based metrics we introduced are designed to evaluate scenarios where dynamic test cases may not be applicable or effective. These rules are not based on manufacturer guidelines; rather, they were developed based on security best practices and known coding standards within the research and cybersecurity communities. For example, in cases like CWE-327 (Use of a Broken or Risky Cryptographic Algorithm), the rule is to check whether the generated code uses the `random` library for cryptographic purposes instead of a more secure option like `secrets`. \\n\\n**4. Whether SecCodePLT considers multi-stage attacks in MITRE ATT&CK**\\n\\nThank you for pointing this out. We considered multi-stage attacks in our benchmark. More specifically, in Figure 5 of our paper, we analyzed the Attack Success Rate (ASR) for different stages of the attack chain separately, providing a detailed view of model performance at each attack phase. We can tell from the result that if the attack of each stage were independent, the theoretical success rate of a complete attack chain would be less than 0.5% (the ASR for Weaponization and C2 are both less than 10%). 
To validate this point, we also conducted end-to-end attack experiments where we prompted the models to execute complete attack chains from Reconnaissance to Collection. In these experiments, we conducted 500 independent trials for each selected model. We found zero successful cases of complete attacks. More specifically, GPT-4o, Claude3.5-Sonnet, and Llama3.1-70B pass an average of 0.68/5, 0.6/5, and 0.1/5 stages, respectively. We added this discussion to Section 4 in our revised paper.\\n\\nIn addition, we thank the reviewer for pointing out typos. We have corrected them in the revised paper.\"}", "{\"comment\": \"**4. The metric of the security relevance experiment and whether security policy reminder is forced as part of the input**\\n\\nWe thank the reviewer for the constructive feedback. We would like to clarify that the evaluation metric for security relevance in SecCodePLT is not designed to assess the ability of LLMs to generate contextually appropriate responses. Instead, it evaluates whether the prompts used in the benchmark effectively highlight the security-critical aspect of a task, ensuring alignment with the intended CWE context. This metric focuses on the quality and relevance of the prompts, verifying that they accurately frame the security scenario required for evaluating model behavior.\\nThe line of the security policy reminder in the judge prompt template (Appendix D.1) is **optional**. When conducting evaluations without the security policy, this line is removed from the template entirely. Figure 3 in the paper highlights these evaluations, showing results for both setups\\u2014one with the security policy included and one without. We clarified this in Section 4 and Appendix D in the revised paper.\\n\\n**5. Ablation studies of security policy reminder**\\n\\nWe thank the reviewer for the constructive comments. 
We would like to first point out that our paper includes the ablation of the security policy reminder prompt in Figure 4, which evaluates model performance with and without the security policy reminder. The results in Figure 4 clearly demonstrate the impact of the security policy reminder, with significant improvements (approximately 30% improvement on the rule-based set and a 10% improvement on the pass@1 set) in secure coding rates when the policy is included. \\nIn addition to testing the presence or absence of the security policy reminder, we also experimented with different styles of the policy prompt by rephrasing it using gpt-4o-2024-08-06 and claude-3-5-sonnet-20240620. When comparing performance across models with differently rephrased styles of the security policy reminder, we observed that the differences were within 3% for all evaluated models. This finding demonstrates that the specific rephrased style has a minimal impact on model performance, as long as the core guidance remains clear and understandable. We added this new experiment in Appendix J of the revised paper.\\n\\n**6. Discussion about defenses in the CH tasks**\\n\\nAs shown in our experiment, the capabilities of SOTA LLMs in the CH task are still very limited. The models can barely launch successful attacks even without any defenses. As such, we do not include defenses. We agree with the reviewer that once the model can launch attacks at a reasonable success rate, it is necessary to test their resiliency against defenses. As such, we respectfully believe that it is reasonable to defer this extra step as part of our future work. We added this discussion to Section 5 in the revised paper. \\n\\n**7. The current CH tasks lack generalizability and description of attack capabilities**\\n\\nThe goal of our work is to provide a standard, controlled, and manageable environment that covers the major steps in the cyber kill chain and MITRE attack procedures. 
We built the current environment to serve this purpose. It is noted that having such a system with vulnerabilities and attack paths injected requires non-trivial effort. Given that the SOTA models cannot perform well in the current environment, we respectfully believe that it is reasonable to defer more complex attack environments to future works. We added this discussion to Section 5 in the revised paper. \\n\\n**8. Overall ASR for the end-to-end attack and whether the attack process is fully automated**\\n\\nAs shown in Table 2, we designed criteria for each attack stage to decide whether an attack succeeds and evaluated each stage independently. The overall ASR counts an attack as successful only if it passes the criteria of every stage. We further conducted an experiment to test the overall attack performance of selected models. For each model, we use it to launch an attack from the first attack stage. If the attack of the current stage succeeds, it will move to the next stage. An attack that passes all stages is marked as a successful attack. We conducted 500 trials for each model and got a zero ASR. More specifically, GPT-4o, Claude3.5-Sonnet, and Llama3.1-70B pass an average of 0.68/5, 0.6/5, and 0.1/5 stages, respectively. We added this experiment to Section 4. We also want to clarify that our evaluation process is fully automated.\"}", "{\"title\": \"Global Response\", \"comment\": \"Dear Reviewers,\\n\\nWe thank the reviewers for the insightful questions and reviews. Your time and effort dedicated to improving our work are truly appreciated. We have responded to all the insightful comments with extra experiments. All modifications are marked in red color in the revised paper. Below we summarize the experiments and changes we made in the revision. \\n\\n**We added the following experiments:**\\n1. We conducted a coverage test of our testing cases and showed that our test cases achieve high coverage, with an average of 90.92%. 
Most of the uncovered code consists of redundant return statements and exception handling that are unrelated to the vulnerability. This experiment validates the quality of our testing cases. We added this experiment to Section 3. \\n\\n2. We conducted an experiment to test the overall attack performance of selected models. For each model, we use it to launch an attack from the first attack stage. If the attack of the current stage succeeds, it will automatically move to the next stage. An attack that passes all stages is marked as a successful attack. We conducted 500 trials for each model and got a zero ASR. More specifically, GPT-4o, Claude3.5-Sonnet, and Llama3.1-70B pass an average of 0.68/5, 0.6/5, and 0.1/5 stages, respectively. We added this experiment to Section 4.\\n\\n3. We did an extra experiment to show that there is a possibility of jailbreaking the model. More specifically, we tried three jailbreaking strategies. (1) Let the model act under a role-play scenario of a CTF player. This strategy comes from Cybench and NYU-CTF datasets [1,2]. (2) DAN (do anything now), which is a popular jailbreaking prompt originating from \\u201cLearn Prompting\\u201d [3]. (3) Restricted Response (i.e., \\u201cStrictly limit the scope of your answer to Linux command and command reasoning\\u201d), which is also a popular jailbreaking prompt used in jailbreaking text inputs [3]. For each strategy, we conducted experiments on weaponization and C2&command tasks due to their high refusal rates. The results are shown in Figure 12. We discovered that prompting the model to act as a CTF player and using restricted responses yielded the lowest refusal rates. On the contrary, the popular jailbreaking technique DAN (Do Anything Now) is ineffective in our task. We added this experiment to Appendix H. \\n\\n4. We replaced GPT-4o with claude-3-5-sonnet-20240620 as an alternative judge in the security relevance experiment. 
The evaluation results showed minimal variation between the two models, demonstrating that the evaluation is not overly dependent on a specific LLM and confirming the reliability of the security relevance metric. We added this experiment in Appendix I in the revised paper. \\n\\n5. We experimented with different styles of the policy prompt by rephrasing it using gpt-4o-2024-08-06 and claude-3-5-sonnet-20240620. When comparing performance across models with differently rephrased styles of the security policy reminder, we observed that the differences were within 3% for all evaluated models. This finding demonstrates that the specific rephrased style has a minimal impact on model performance, as long as the core guidance remains clear and understandable. We added this new experiment in Appendix J of the revised paper.\\n\\n**In addition, we made the following modifications to the paper:**\", \"section_1\": \"1. We clarified the logic behind selecting the insecure coding and cyberattack helpfulness tasks.\", \"section_2\": \"1. We added the comparison of our benchmark with BigCodeBench, which focuses on general coding capabilities rather than security and risks.\\n2. We added a comparison of our benchmark with LLMSecEval, which does not enable dynamic evaluation and structured inputs. \\n3. We added the comparison of our benchmark with related works pointed out by Reviewer fom1, which are also about general coding capabilities and new coding methods rather than security and risk benchmarks.\\n4. We added a discussion about the difference between our CH benchmark and the existing cyber ranges and vulnerability detection, reproduction, and penetration testing benchmarks.\\n5. We clarified why it is necessary to build a new CH benchmark.\", \"section_3\": \"1. We added more details and clarifications about our data generation process for the insecure coding task.\\n2. We explained the dynamic method used to ensure the correctness of the generated data.\\n3. 
We added our filtering step to avoid redundancy. \\n4. We clarified that we considered lateral movement in our benchmark.\", \"section_4\": \"1. We clarified that our CH benchmark considered multi-stage attacks.\\n2. We clarified the purpose of our security relevance experiment and the corresponding metric.\", \"section_5\": \"1. We added a discussion about other programming languages.\\n2. We added a discussion about supporting user-specific prompts.\\n3. We added a discussion about considering defenses and other attacks in the CH task.\"}", "{\"comment\": \"Thanks for the response.\\nTo improve the safety test, have you checked if it's possible to cross-validate your test's results with that of a SAST tool? CodeQL is popular in the academic literature but there are many out there.\"}", "{\"comment\": \"**1. Missing related works**\\n\\nAs discussed in the related work, we mainly compare our work with security-related coding model benchmarks, as such we do not compare it with general code generation benchmarks. Given that our focus is to build benchmarks rather than proposing new code generation models or methods, we did not discuss existing code generation methods in the paper. We added the related papers and clarified this in Section 2 of the revised paper. \\n\\n**2. Questions about task description and rule-based metrics**\\n\\nWe thank the reviewer for the comment. Yes, each sample in our benchmark has a task description. For test cases, we employ a mix of dynamic and rule-based methods to evaluate both the functionality and security of the generated code. Dynamic test cases involve running the code with a variety of inputs to verify that it performs as expected and remains secure under different conditions. \\nThe rule-based metrics are designed to evaluate scenarios where standard test cases may not be applicable or effective. 
For example, in cases like CWE-327 (Use of a Broken or Risky Cryptographic Algorithm), the rule is to check whether the generated code uses the `random` library for cryptographic purposes instead of a more secure option like `secrets`.\\n\\n**3. What types of code mutators are used**\\n\\nThank you for the comment. The code mutators we use are carefully designed to preserve the core functionality and security context of each task while introducing controlled variations. We used claude-3-5-sonnet-20240620 for the mutation. The prompt is shown in Appendix K.\\n\\n**4. What dynamic method is used to ensure the correctness of the generated data**\\n\\nFor each code sample, we executed both the vulnerable and patched versions using a testing framework that loads the setup, core function code, and associated test cases. These dynamic tests assess both capability and safety by verifying that both the vulnerable and patched code fulfill the intended functionality, while also ensuring that only the patched code avoids unsafe behavior by passing all security checks, whereas the vulnerable code fails these tests. We filtered out any that do not meet these criteria: capability test cases are retained only if they pass for both versions and safety test cases are kept only if they pass in the patched code but fail in the vulnerable version. After filtering, we verify that at least one valid test case remains in each category; if there are insufficient valid cases, we rerun the code mutator to generate additional variations that meet these requirements. We clarified this in Section 3 of the revised paper.\"}", "{\"comment\": \"Dear Reviewer N2DR,\\n\\nSorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.\\n\\nShould this be the case, we would appreciate it if you could raise the final rating to reflect this.\\n\\nWe are looking forward to your reply. 
Thank you for your efforts in this manuscript.\\n\\nBest regards, Authors\"}", "{\"summary\": \"This paper develops SECCODEPLT, a unified and comprehensive evaluation platform for code GenAIs\\u2019 risks. It introduces a new methodology for data creation that combines experts with automatic generation for insecure code which ensures the data quality while enabling large-scale generation. It also associates samples with test cases to conduct code-related dynamic evaluation. Furthermore, it sets up a real environment and constructs samples to prompt a model to generate actual attacks for the task of cyberattack helpfulness, along with dynamic metrics in our environment.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents a pioneering approach by integrating a database with two distinct security-related tasks. SECCODEPLT serves as a comprehensive platform that unifies the evaluation of GenAIs\\u2019 risks associated with code generation. This integration facilitates a holistic approach to assessing different dimensions of security risks. By associating samples with test cases, SECCODEPLT enables dynamic evaluation related to code. This method allows for real-time assessments and adjustments, providing a deeper analysis of the code's behavior in practical scenarios.\", \"weaknesses\": \"1. The programming language used in the paper is limited, with Python being the sole language explored. This is inadequate for a comprehensive and large-scale benchmark. The inclusion of other programming languages like C/C++ and Java, which constitute a significant portion of recent CVEs, is crucial. These languages are more complex in syntax and more broadly applied, offering valuable insights into the capabilities of LLMs.\\n2. The paper's description of the data generation process for the IC task is unclear. 
It mentions the use of two different mutators to generate data, yet it fails to clarify the generation of the corresponding test suites. It is uncertain whether the test suites for these new datasets are generated by LLMs or if they reuse the original suites. If generated by LLMs, how is the quality of these suites assured? If the original test suites are used, can they adapt to new contexts effectively?\\n3. The paper lacks a necessary ablation study. The boundary of what is user control and what is provided by benchmark is not well clarified. The rationale behind the design of the prompts and instructions used to trigger evaluations is not well justified. For example, why do the authors use system prompts and user templates shown in the paper? Are they more reliable and efficient? Will the differences in these prompts affect the evaluation of LLM ability? If users want to use their own prompts, is there any way?\\n4. The evaluation metric of security relevance is confusing and lacks rationales. It is unclear whether this metric aims to assess specific properties of LLMs or the prompts themselves. Because the benchmark is designed to evaluate LLMs, using a metric that assesses the prompts introduces confusion. Furthermore, in the SECURITY-RELEVANCY JUDGE prompt template (D.1), the security policy reminder is included as part of the user input and fed directly to the LLM. This setup may influence the evaluation of security relevance and potentially introduce bias.\\n5. The ablation of the security policy reminder is missing, similar to problem 3. The paper does not discuss the reasons for choosing the security policy reminder prompt.\\n6. The paper lacks a discussion on the specific defenses employed in the CH task. In realistic settings, a variety of defenses, such as firewalls and intrusion detection systems, are typically deployed. It will be insightful to know how different LLMs perform when various defenses are considered in a simulated environment.\\n7. 
The usefulness and generalization of the CH task is limited. Practical attacks vary significantly and are influenced by diverse factors, but the scenario described in the paper lacks generalizability across different attack types and target systems. This limited setting restricts the ability to conduct an accurate and comprehensive evaluation of LLMs for the CH task. Additionally, the paper does not specify the capabilities of attackers, including the types of tools that can be used to launch attacks with LLMs. Also, the strong assumption that some internal users will click on phishing or other harmful links further reduces the task's practical relevance.\\n8. Evaluation metrics in CH task. It will be better to set a specific metric to evaluate the overall ASR for the end-to-end attack. Additionally, the details regarding the evaluation process are not well-explained \\u2013 whether it is a fully automated process or requires human input at various stages to guide or adjust the evaluation.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"Scientific Claims and Findings:\", \"This paper introduces SecCodePLT, a Python-focused benchmark designed to evaluate the security risks associated with code generated by existing code LLMs. 
SecCodePLT assesses a model\\u2019s ability to produce secure code and its potential to facilitate end-to-end cyberattacks.\", \"Strengths:\", \"The proposed benchmark, SecCodePLT, is a valuable contribution to the field.\", \"SecCodePLT outperforms the state-of-the-art benchmark, CyberSecEval, in both prompt faithfulness and security relevance.\", \"SecCodePLT more effectively reveal a model\\u2019s risk in generating insecure code compared to CyberSecEval.\", \"Weaknesses:\", \"The writing lacks clarity and could benefit from improved organization, presentation, and detail.\", \"Insufficient comparison with existing benchmarks.\", \"The evaluation of state-of-the-art code generation models is not comprehensive. Models such as DeepSeek Coder, CodeQwen, and others are neither discussed nor experimented with.\", \"Most Important Reasons for Decision:\", \"Based on the identified weaknesses.\"], \"additional_comments_on_reviewer_discussion\": \"This paper has significantly benefited from the review process. The updated version shows considerable improvement over the initial submission, leading Reviewers 8iyW and N2DR to raise their scores to 6 after the rebuttal.\\n\\nOverall, the AC believes that this benchmark paper could be improved by enhancing its comprehensiveness and clarity. Another round of revisions would be beneficial.\"}", "{\"comment\": \"Thank you for your responses and for conducting the additional experiments.\\n\\nRegarding your comment #7: Apologies for the confusion \\u2014 I was referring to the Metasploitable 2 & 3 VMs, not Metasploit.\", \"remaining_concerns\": \"================\\nOverall, I believe your response has addressed most of my concerns, but I am still not entirely convinced about the evaluation of the correctness of the security test cases. 
Specifically, your statement that \\u201csafety test cases are kept only if they pass in the patched code but fail in the vulnerable version\\u201d raises the following question:\\n\\nHow can you be sure that it is the test cases that are problematic if both the patched and vulnerable code are manually generated? Could it instead be that the generated code does not meet its requirements (i.e., the vulnerable version may not truly contain the vulnerability, or the patched version may not be adequately fixed)?\\n\\nThis highlights the need for stronger evidence that the vulnerable code contains the required vulnerabilities and that the patched code effectively mitigates them. Moreover, it remains crucial to demonstrate that the security test cases are functioning as intended.\", \"general_feedback\": \"===============\\nI will raise my score, but I believe the paper still needs more attention to the writing, especially around the dataset creation process. This section should explain:\\n - How you ensure that the vulnerable code reliably contains the required vulnerabilities.\\n - How you ensure that the patched code is indeed patched.\\n - Most importantly, how you verify that the security test cases are working as intended.\\n\\nMisprints/Clarifications:\\n----------------------------\\n - The newly added text on page 2 uses the IC and CH notation before it has been defined.\\n - The newly added text needs checking for typos/grammatical errors.\\n\\nThank you again for your thoughtful responses and for addressing many of the points raised.\"}" ] }
0QvLISYIKM
Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study
[ "Shelvia Wongso", "Rohan Ghosh", "Mehul Motani" ]
Estimating the confidence of deep neural network predictions is crucial for ensuring safe deployment in high-stakes applications. Softmax probabilities, though commonly used, are often poorly calibrated, and existing calibration methods have been shown to be harmful for failure prediction tasks. In this paper, we propose to use information-theoretic measures to estimate the confidence of predictions from trained networks in a post-hoc manner, without needing to modify their architecture or training process. In particular, we compare three pointwise information (PI) measures: pointwise mutual information (PMI), pointwise $\mathcal{V}$-information (PVI), and the recently proposed pointwise sliced mutual information (PSI). We show in this paper that these PI measures naturally relate to confidence estimation. We first study the invariance properties of these PI measures with respect to a broad range of transformations. We then study the sensitivity of the PI measures to geometric attributes such as margin and intrinsic dimensionality, as well as their convergence rates. We finally conduct extensive experiments on benchmark computer vision models and datasets and compare the effectiveness of these measures as tools for confidence estimation. A notable finding is that PVI is better than PMI and PSI for failure prediction and confidence calibration, outperforming all existing baselines for post-hoc confidence estimation. This is consistent with our theoretical findings, which suggest that PVI is the most well-balanced measure in terms of its invariance properties and sensitivity to geometric feature properties such as sample-wise margin.
[ "information theory", "confidence estimation", "deep neural networks" ]
Reject
https://openreview.net/pdf?id=0QvLISYIKM
https://openreview.net/forum?id=0QvLISYIKM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yQNiBrwZgY", "vWSiqnHAhG", "vPUyWCF9Yd", "uZGzyBDjGN", "tjT4anwr5z", "rSpVuYxBGX", "qM8JkW4cqo", "pb1qoi1iGz", "lyOsSz9J9e", "kpFeDUajeA", "er123yzxXh", "dm8jpTDjLl", "dT22ehs6X9", "ZnsNmDTKNS", "ZjhXX0FSML", "Yh0vcfSETF", "W4GFpHqrlS", "UY5299uEF1", "TWSGehu7bW", "SpzLL5Zlbp", "OarUovTdlN", "OFxnkC1tTz", "NHbXlEZc1d", "Kt7qWTASla", "KUBOLVgCZ2", "J0vRuTgWFU", "IghCv9nmDK", "Hukjl7E7EO", "DRdKgv6GG4", "8DtShzgJC7", "5Ubjj7KmYw", "4vVovlQopE", "1KgZH6XDiK", "0UgrJlS92P", "0LCEkKyUSO" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1730380925442, 1732302644182, 1732299420140, 1732621067739, 1732389649306, 1734693921559, 1730412458822, 1730707737592, 1732385089717, 1733312146833, 1732298977854, 1733312672405, 1732302852479, 1732303203210, 1732643755690, 1732429794646, 1733312190618, 1732306540548, 1732377086849, 1733313239228, 1730704443786, 1733313652276, 1732377364413, 1733312805225, 1730708114338, 1732632989062, 1732377507162, 1732301853924, 1732306392892, 1732377661114, 1733313616481, 1732304112083, 1737523927825, 1732384581122, 1732304629566 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_HF1k" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_HF1k" ], 
[ "ICLR.cc/2025/Conference/Submission8716/Reviewer_ixZH" ], [ "ICLR.cc/2025/Conference/Submission8716/Area_Chair_EEBf" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_yJeK" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_nRyg" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_nRyg" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_9w5v" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_ixZH" ], [ "ICLR.cc/2025/Conference/Submission8716/Reviewer_yJeK" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ], [ "ICLR.cc/2025/Conference/Submission8716/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores the use of point-wise information measures as notions of prediction confidence for neural networks. 
They propose three existing information measures (PMI, PSI, PVI) with associated estimation methods, and state their theoretical properties in terms of invariance to transformations, geometric properties w.r.t. decision boundary, and convergence rates of the estimators to the true measures. The results are motivated as useful or intuitive for uncertainty quantification or confidence estimation. Then two experiments on misclassification detection and selective prediction are performed, where the measures are compared to a few simple alternative notions of model confidence. Finally, their calibration property in terms of ECE is examined. The authors suggest that their point-wise information measures provide accurate and robust measures of model confidence.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Approaching confidence estimation from an information-theoretic perspective provides an interesting angle, and the suggested information measures seem relevant and somewhat practical.\", \"The paper clearly outlines multiple factors of motivation for the work, which help put the approach into a broader context.\", \"The information measures are closely examined and various theoretical properties are studied. This is also obvious from the substantial appendix which lists many properties of the information measures and possible estimation methods.\"], \"weaknesses\": [\"My main concerns are with respect to the claims made by the paper, obtained insights, and the experimental design and connection to confidence estimation. I will list my points of concern under each of these categories, albeit they are connected.\", \"Claims\", \"The proposed measures are motivated as a post-hoc approach to confidence estimation. All the estimation methods for PMI/PSI/PVI require custom neural network modeling and training. How is this supposed to be post-hoc? 
For example, how am I supposed to apply this to an existing, pre-trained neural network that I treat as a black-box and do not want to fine-tune? I do not think this qualifies as post-hoc.\", \"The proposed measures are motivated by a \\u201cRelationship to Probabilistic Causation\\u201d yet this relationship is never mentioned again or examined, and only briefly mentioned in the limitations/future work section. In that section it is then also claimed that \\\"the PI measures are the optimal choice of explainability\\\" but there are no proper experiments on model explainability or causality, so this claim is not backed up in any way.\", \"The proposed measures are motivated by their \\u201cDirect Probability Computation\\u201d, but given their value ranges the only way to obtain probabilities is to pass them through a squashing function such as softmax, which is precisely done in the experiments. So how does the interpretation of obtained probabilities, and their associated reliability, differ in any way from just a regular softmax on logits? The associated claims on \\\"robustness\\\" are not examined or backed up in any way.\", \"The proposed measures are motivated by the need for \\u201cuncertainty quantification\\u201d and prevalent miscalibration of neural networks. Firstly, recent research has shown that modern neural network architectures such as transformers (not considered here) can in fact be quite well calibrated [1,2], and even if this was not the case, their proposals do not address a way to remedy model miscalibration but rather just suggest another confidence measure. Their claim on having \\u201cbetter calibrated\\u201d confidence measures is unconvincing to me due to their experimental design (see below), and thus their very strong claim on \\\"outperforming all existing baselines for post-hoc confidence estimation\\\" is poorly backed up. 
Finally, the relationship between proposed measures and meaningful uncertainty interpretations is very speculative and does not reach beyond a few broad and high-level arguments in their remarks (see below). So overall, this angle of motivation is also lacking based on their strong claims.\", \"Insights\", \"I have a key question: By the invariance properties in sec 3.1 we have that PMI is best, by the geometric and convergence properties in sec 3.2. and 3.3 we have that PSI is best, but in the performed experiments we find that PVI is best. How can you reconcile this and claim that theory and experiments are in line with each other?\", \"To re-iterate on the connection to uncertainty quantification: it is repeatedly stated that there is a high relevance for \\u201cmodel uncertainty\\u201d, which equates to a notion of epistemic uncertainty. But then, Remark 1 motivates that uncertainty should be invariant to data transformations, which now relates to notions of data (aleatoric) uncertainty. Yet overall, the quantity of interest is in fact p(y|x) which is simply predictive uncertainty of the model given an input. So, it does not seem to me like there is a principled association between the information measures and actual notions of uncertainty, and the authors are not clear about what kind of uncertainties we are trying to address. Overall, the connections to uncertainty are mainly contained in the motivating introduction, and in Remark 1 and Remark 2, and are all very high-level and speculative.\", \"Regarding Remark 1: the quantity of interest is p(y|x), whereas data transformations are applied to features X. Since we then have that $g(X) \\\\neq X$, I don't necessarily see an issue if $p(y| g(X)) \\\\neq p(y|X)$ because we are conditioning on a different quantity.\", \"Regarding Remark 2: The provided interpretation for confidence estimation does not take into account data atypicality or OOD'ness [3]. 
These samples may lay far away from the decision boundary/margin but also in the tail of the data support, and thus should ideally exhibit low confidence. Also, the interpretation on confidence correlating with margin distance is only desirable for overlapping supports. If e.g. $P(X|Y=0)$ and $P(X|Y=1)$ are clearly separated (as e.g. used in Prop. 4) then we would desired maximum confidence everywhere. So, it is unclear to me how Remark 2 follows from the stated results and how directly applicable/useful these results are.\", \"Regarding sec 3.2: The section seems to borrow different tools from different papers to show results for the different information measures. However, the assumptions (which seem very strong), conditions of validity, and form of results are all very different, and no interpretations are given. How do they relate to each other in terms of strength and the individual components influencing them? For example, what is Prop. 4 for PMI useful for and why do we not have a similar result in regards to \\\"sample-wise margin\\\" as for PSI and PVI? Similar questions also apply to sec 3.3 on convergence results.\", \"Regarding L228: based on the wording it is unclear what the \\u201csample-wise margin\\u201d is supposed to be. A mathematical definition seems necessary here.\", \"Regarding the margin correlation experiment: The results are confusing to me because in Table 1 it seems like the results between different measures are quite different (e.g., PMI has lower and more volatile correlations than PSI), yet the UMAPs virtually all look the same. How should I understand that?\", \"Experiments\", \"My main concern is about the fact that their information measures are passed through a softmax function and *calibrated with temperature scaling* before benchmarking against other methods. 
This seems like a very biased comparison, since we observe in App D.1 that these operations significantly alter the distributions of the information measures, and improve upon the considered performance metrics. It seems unreasonable to me to *scale and calibrate* your measures beforehand, and then claim afterwards that they provide \\\"direct probabilities\\\" and \\\"well-calibrated\\\" confidence estimates. How is this a fair comparison to any baselines that are not subject to the same transformations, e.g. ML, LM? In that context, do you also apply temperature scaling to any of the other baselines such as softmax (MSP, SM)? I feel like any performance claims should rather be reported for the raw information measures instead, since it otherwise becomes unclear where any benefits stem from.\", \"In the experiments on confidence calibration only two simplistic baselines are considered, and they are marginally outperformed. To claim that they are \\\"outperforming all existing baselines\\\" seems like a very strong claim in that light. It would be more meaningful to consider other baselines for confidence estimation, including those that have been subjected to a similar approach of re-calibration as they do for their own measures (using temperature scaling), e.g. isotonic regression [4], regularization [5], or other uncertainty methods like models with variance predictor [6] etc.\", \"Relatedly, the results in Table 3 are often within each other's margin of error, so there are some questions on the reliability or significance of \\u201coutperforming\\u201d.\", \"In L401-403, what is the intuition for only working with features from the last layers? This is not clearly explained.\", \"Are there any clear principles guiding the choices of estimation methods for the information measures, and associated transformations (i.e., softmax or temperature scaling)? Based on the appendix it seems like purely based on hold-out performance. 
If so, why not consider other squashing functions or re-calibration procedures? The choices are not well documented and seem primarily motivated for their simplicity.\", \"I am personally missing a more detailed and meaningful interpretation of the results beyond stating what can be seen in the provided results tables.\", \"Summary\", \"In conclusion, I find that the paper makes overly strong claims and motivates the work from multiple angles which are then left unexplored or never properly analyzed. The experimental design raises some questions and combined with the marginal improvements in experiments casts doubts on the practicality and usefulness of the approach. I am struggling to see the real novelty of the paper. The proposed information measures and their estimation methods are all taken from existing papers, and many of the theoretical results rely strongly on these papers as well. Is the theoretical analysis novel? Personally, it is hard for me to say since I am unfamiliar with this research domain. Is the novelty then in its application/use for confidence estimation? The experiments are unconvincing, and the connections to uncertainty do not go beyond some high-level arguments. In addition, their theoretical insights and empirical results seem somewhat contradictory on what the best information measure is supposed to be. For example, they conclude that \\\"This superior performance is likely due to PVI being the most well-rounded metric, particularly in terms of its invariance and margin sensitivity\\u201d, even though Remarks 1, 2 and 3 on these properties rank PVI lowly. While the paper explores some interesting information-theoretic tools, their use for robust and reliable confidence estimation is substantially lacking in my opinion.\", \"References\", \"[1] Minderer, Matthias, et al. 
\\\"Revisiting the calibration of modern neural networks.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a034 (2021): 15682-15694.\", \"[2] Wang, Deng-Bao, Lei Feng, and Min-Ling Zhang. \\\"Rethinking calibration of deep neural networks: Do not be afraid of overconfidence.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a034 (2021): 11809-11820.\", \"[3] Yuksekgonul, Mert, et al. \\\"Beyond confidence: Reliable models should also consider atypicality.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a036 (2024).\", \"[4] Naeini, Mahdi Pakdaman, and Gregory F. Cooper. \\\"Binary classifier calibration using an ensemble of near isotonic regression models.\\\"\\u00a02016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 2016.\", \"[5] Mukhoti, Jishnu, et al. \\\"Calibrating deep neural networks using focal loss.\\\" Advances in Neural Information Processing Systems 33 (2020): 15288-15299.\", \"[6] Maddox, Wesley J., et al. \\\"A simple baseline for bayesian uncertainty in deep learning.\\\" Advances in neural information processing systems 32 (2019).\"], \"questions\": \"Please observe and address my questions and comments in the weakness section, such as on the strength of claims, usefulness of theoretical results for confidence estimation, experimental design, and others.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer HF1k (Part 3)\", \"comment\": \"> The proposed measures are motivated by the need for \\u201cuncertainty quantification\\u201d and prevalent miscalibration of neural networks. 
Firstly, recent research has shown that modern neural network architectures such as transformers (not considered here) can in fact be quite well calibrated [1,2], and even if this was not the case, their proposals do not address a way to remedy model miscalibration but rather just suggest another confidence measure. ... So overall, this angle of motivation is also lacking based on their strong claims.\\n\\nWe agree that more recent models (especially those without convolutions) or regularized models (such as with label smoothing, Lp norm in the Function Space, focal loss) are more well-calibrated, as noted in the references mentioned. Nevertheless, architectures like CNNs and MLPs (which we focus on) are still widely used due to their relative simplicity, and they are known to suffer from miscalibration, as corroborated by extensive prior work. Also, in this work the focus is not solely on remedying model miscalibration, but also having overall better confidence estimates which might be useful for other tasks such as failure prediction (misclassification detection and selective prediction). It has been shown that many confidence calibration methods (such as label smoothing and focal loss) are useless or harmful for failure prediction (Zhu et al., 2022). In the paper, they also found that MSP outperforms these popular confidence calibration methods for this task. In light of these findings, we believe our results, even if the improvement over MSP is modest, are still significant. We will refine our motivation to emphasize not only addressing model miscalibration but also the need for more accurate and reliable confidence estimates that can be effectively applied across a wide range of tasks. In the following responses, we address concerns regarding:\\n1. **experimental design**: Specifically, the concern about not calibrating other measures, which is not the case for our work.\\n2. 
**connection between proposed measures and uncertainty interpretations**: We strengthen our arguments and provide additional details, which will also be reflected in the remarks.\\n\\nPlease refer to the corresponding detailed responses below as these concerns are addressed accordingly.\", \"reference\": \"Zhu, F., Cheng, Z., Zhang, X.-Y., & Liu, C.-L. (2022). Rethinking Confidence Calibration for Failure Prediction. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 518\\u2013536).\"}", "{\"title\": \"Response to Reviewer HF1k (Part 1)\", \"comment\": \"We thank the reviewer for their positive feedback and valuable suggestions. Below, we address the concerns and questions that the reviewer has regarding our paper:\\n\\n> All the estimation methods for PMI/PSI/PVI require custom neural network modeling and training. How is this supposed to be post-hoc? For example, how am I supposed to apply this to an existing, pre-trained neural network that I treat as a black-box and do not want to fine-tune? I do not think this qualifies as post-hoc.\\n\\nWe adopt the same post-hoc definition as Cattelan and Silva (2024): post-hoc approaches replace the confidence estimator of a given classifier (using outputs or intermediate features produced by the model) without altering or retraining the original classifier. For PMI and PSI, our custom neural network modeling or training does not modify or retrain the original network. For PVI, although we train an additional network to provide confidence for the original network, this process still aligns with the post-hoc definition as it does not involve retraining the original classifier. Having said that, we agree with the reviewer that our measures cannot be applied to an existing, pre-trained neural network, that can only be treated as a black box without access to training data. 
However, with the restriction of not modifying the trained original network, we found that we can only compare our approach with other post-hoc approaches, as the alternatives either involve changing the network itself, or working with other pre-defined network types (like Bayesian neural networks).\", \"reference\": \"Cattelan, L. F. P., & Silva, D. (2024). How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks. The 40th Conference on Uncertainty in Artificial Intelligence.\\n\\n> The proposed measures are motivated by a \\u201cRelationship to Probabilistic Causation\\u201d yet this relationship is never mentioned again or examined, and only briefly mentioned in the limitations/future work section. In that section it is then also claimed that \\\"the PI measures are the optimal choice of explainability\\\" but there are no proper experiments on model explainability or causality, so this claim is not backed up in any way.\\n\\nThe relationship to probabilistic causation is used to motivate the use of pointwise information (PI) measures. While this relationship motivates our measures conceptually, we believe it does not need to be repeatedly mentioned, as the focus of the paper is on the practical utility of PI measures rather than a deep exploration of causal theory. Instead, we leave this exploration for further studies. In addition, we acknowledge that \\\"optimal\\\" was a strong term and have revised it to \\u201cappropriate\\u201d to better reflect the intention. Similarly, the reference to \\\"explainability\\\" has been adjusted to \\\"uncertainty quantification\\\". Our discussion of probabilistic causation is primarily meant to be a tool to understand why dividing the conditional probabilities with the priors can be useful in the context of uncertainty quantification. 
We have also since added more justification for the division in terms of classical information theory, as we have noted that our measures are essentially computing a pointwise form of information gain, which aligns itself uniquely with the overall objectives of uncertainty quantification.\"}", "{\"comment\": \"I acknowledge and thank the authors for the detailed response. Due to the limited time for further discussion this will be my only comment.\\n\\nOverall, it seems that the authors are tackling what I consider an important problem from an interesting perspective. But, from the response above it is obvious that several gaps have been identified which would consider what I deem substantial changes to the paper in its current form (the authors themselves repeatedly rebut using words such as \\\"adjust\\\", \\\"incorporate\\\", \\\"add\\\", \\\"acknowledge\\\"). This is particularly relevant given the fact that the primary motivation and novelty of the paper is reiterated as \\\"the resulting perspectives on the problem of uncertainty quantification and confidence estimation are new\\\", an angle which, despite clarifying comments in the rebuttal, remains insufficiently addressed from my perspective. Thus, I am not inclined to change my score and stand by my initial assessment. I encourage the authors to carefully reconsider some of their motivations and arguments and incorporate any review comments into an improved iteration of their work. 
\\n\\nI am adding responses to a few individual comments made by the authors below, in no particular order of importance.\\n\\n> We adopt the same post-hoc definition as Cattelan and Silva (2024): post-hoc approaches replace the confidence estimator of a given classifier (using outputs or intermediate features produced by the model) without altering or retraining the original classifier.\\n\\nIt is good and important to clarify what \\\"post-hoc\\\" in this context refers to, since requiring training data and model access is a more limiting definition than its applicability to any \\\"black-box\\\" pre-trained model.\\n\\n> While this relationship motivates our measures conceptually, we believe it does not need to be repeatedly mentioned, as the focus of the paper is on the practical utility of PI measures rather than a deep exploration of causal theory.\\n\\nThat's fine, but if you state in your motivation that something is a \\\"key factor\\\" and then cease to follow up on it (for 2/4 key factors) and also do not expand on \\\"We argue that this problem can be mathematically formulated\\\" in the context of that factor, it strikes me as a weak motivation.\\n\\n> the PI measures are derived from a direct probability computation framework... Logits, on the other hand, can be seen as relative scores that the model assigns to each class, and they lack probabilistic grounding because they do not directly represent likelihoods or probabilities in the mathematical sense. ... the core idea remains that these measures fundamentally work with probabilities rather than arbitrary scores\\n\\nYour measures are clearly unbounded as stated in the appendix, and if they do not satisfy Kolmogorov's axioms they are as little valid probabilities to me as any logit scores. The argument for \\\"direct probability\\\" seems more related to point-wise data effects, which is more related to influence functions. 
At the end of the day you still squash them to the desired $[0,1]$ range and call them probabilities thereafter, just as logits do.\\n\\n> PMI is best in terms of being the most invariant to transformations, however this may not be a boon but rather a bane in the context of the uncertainty quantification problem. Therefore, invariance to any homeomorphic transformation could be counterproductive.\\n\\nThis seems in direct contradiction to your statements made in Remark 1 on this topic, where you state \\\"Note that overall we see that PMI is the most invariant in nature, followed by PVI and then PSI. Thus PMI is the most structure preserving in nature. This property is helpful in the context of uncertainty estimation\\\". Perhaps you should re-think Remarks 1 and 2 in light of your arguments.\\n\\n> Our objective with the convergence bounds was initially just to outline the various dependencies in these results rather than directly compare and judge them on the same scale, which is definitely challenging due to these approaches being significantly different in terms of their estimation process.\\n\\nYet, if I plan on using your approaches for uncertainty quantification then I would like to know what the take-away is. Do they shed any additional insights into what measure I might prefer for a given setting (e.g., if I have strongly overlapping class supports, or high correlation, etc.). It is good to state theoretical results, but it is better to explain how they are conducive to your stated goal of improving UQ. 
In that context, it is necessary to also relate them to each other.\\n\\n> We note that we are using different definitions of sample-wise margin for the PSI and the PVI results in Theorem 1 and Proposition 5.\\n\\nMore reasons to clearly define the sample-wise margin you use.\\n\\n> We are currently conducting the experiments and will be including the results shortly.\\n\\nPlease do reconsider the scientific rigor of your experimental design and the fairness of your baseline comparisons in that context.\"}", "{\"comment\": \"Thank you for the clarification and additional information. Based on the reviewers' responses, I see two major issues: (1) The proposed estimators are dependent on the datasets and their class-wise distribution, which is a significant limitation for their use as confidence estimators or model uncertainty estimates; (2)The improvement in the reported results heavily relies on Temperature Scaling [Guo et al. 2017]. This is evident in Table 12 in the Appendix, where the results are inferior to baseline softmax measures without temperature scaling.\"}", "{\"metareview\": \"This paper proposes the use of pointwise information (PI) measures, specifically PMI, PSI, and PVI, for uncertainty quantification and confidence estimation in deep neural networks. The authors provide a theoretical analysis of the invariance properties, geometric dependence, and convergence rates of these measures, and conduct experiments to evaluate their performance on various tasks, including confidence calibration, selective prediction, and misclassification detection. The results show that PVI outperforms other measures in many cases, despite its lower invariance to transformations and margin sensitivity.\\n\\nWhile the reviewers acknowledge the importance of the problem and the interestingness of the perspective, they express concerns about the novelty and significance of the contributions, and the clarity and consistency of the presentation. 
The reviewers also question the fairness and rigor of the experimental design, and the validity of the conclusions drawn from the results.\\n\\nIn my opinion, the paper has made significant progress in addressing the reviewers' concerns, and the revisions have improved the clarity and consistency of the presentation. The authors have provided a more detailed and nuanced discussion of the theoretical results, and have addressed the concerns about the experimental design and the validity of the conclusions.\\n\\nHowever, I agree with the reviewer that the paper could benefit from further clarification and discussion of the implications of the results, particularly in terms of the practical applications and limitations of the PI measures. The paper could also benefit from a more detailed comparison with other related work, and a more thorough discussion of the potential pitfalls and challenges of using PI measures for uncertainty quantification.\\n\\nOverall, I believe that the paper has the potential to make a significant contribution to the field. However, after discussion with the authors and among themselves, the reviewers find the paper to still be very borderline, with three reviewers leaning towards acceptance and two towards rejection. We will therefore need to reject the paper in its current form. We would still like to encourage the authors to resubmit an improved version of the paper in the future.\", \"additional_comments_on_reviewer_discussion\": \"see above\"}", "{\"summary\": \"This paper studies the impact of three pointwise information (PI) measures on the uncertainty quantification quality of Deep Neural Networks (DNNs). Through the lens of Information Theory (IT), the authors provide rigorous theoretical results regarding invariance, geometric properties, and convergence rates. 
Extensive experimental results confirm the theoretical arguments, and the benchmarking suggests that the pointwise V-information (PVI) outperforms pointwise mutual information (PMI) and pointwise sliced mutual information (PSI) in failure prediction and confidence calibration.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This paper is very well-written, and the important aspects of the algorithm are clear to understand.\", \"I like the new direction of using IT tools (e.g., mutual information, conditional entropy, etc.) to improve the reliability of DNNs. I believe the benchmark results of three recent PI measures are useful for the community.\", \"The theoretical results are solid, with clear mathematical notation, clear statements, and proofs of the invariance, geometric properties, and convergence rate for each PI measure.\", \"The theory is also confirmed by experimental evidence, e.g., Fig.1 with the correlation-to-margin experiment supporting the geometric properties, and Table 2 with the convergence rate.\", \"The experimental results are extensive, with several settings across different modern DNN architectures on standard benchmark datasets.\"], \"weaknesses\": [\"The novelty regarding a proposed method is weak since PMI, PSI [1], and PVI [2] have been proposed before.\", \"The novelty in theoretical analysis is quite weak. Specifically, the invariance properties have been mentioned in Section 3 of [1] and the convergence rate has been analyzed in Section 3 of [1] and Section 4 of [2].\", \"The connection between theoretical results and model uncertainty is unclear to me. 
Details are in Question 2.\", \"Three PI measures are less computationally efficient than other baselines (e.g., standard Softmax) by requiring additional models and computing either $pmi(x;y)$, $psi(x;y)$, or $pvi(x\\\\rightarrow y)$.\", \"The experimental results also lack some measurements such as sharpness and predictive entropy to assess the uncertainty quality performance.\"], \"questions\": \"1. What do the authors mean by a \\u201cpost-hoc manner\\u201d in L-45? Is this a post-hoc recalibration technique with additional hold-out calibration data to fine-tune some learnable parameters?\\n2. Remark 1 is vague to me. Firstly, what kinds of uncertainty are you talking about (aleatoric or epistemic)? It would be great if the authors could explain this kind of uncertainty through the lens of IT (see Eq.1 in [3]). Secondly, why, when a classifier is uncertain on X, should the uncertainty about $g(X)$ ideally be the same? Can you formally explain this argument and give some examples about this?\\n3. In proof of Prop.6, while [4] provides estimation error bounds on the sample marginal distribution $P(X)$, why the authors can trivially apply their results on the conditional $P(X|Y)$? \\n4. Could the authors please compare methods with the sharpness score [5] in Section 4.2? I think this is important because lower ECE is only a necessary condition, not a sufficient one, to evaluate a good uncertainty estimation with DNNs.\\n5. L-172 mentioned that PVI uses temperature scaling, is this the main reason PVI achieves the lowest ECE in Tab.4?\\n6. Can PI measures extend to other kind of dataset such as text, audio, video, etc.? 
Is there any challenge with this extension?\", \"references\": \"[1] Goldfeld et al., Sliced mutual information: A scalable measure of statistical dependence, NeurIPS, 2021.\\n\\n[2] Xu et al., A theory of usable information under computational constraints, ICLR, 2020.\\n\\n[3] Mukhoti et al., Deep deterministic uncertainty: A new simple baseline, CVPR, 2023.\\n\\n[4] Jiang et al., Uniform Convergence Rates for Kernel Density Estimation, ICML, 2017.\\n\\n[5] Kuleshov et al., Calibrated and sharp uncertainties in deep learning via density estimation, ICML, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The following paper conducts theoretical and experimental analysis on different pointwise information measures that serve as a metric denoting the confidence of neural network predictions. It considers three pointwise information measures: (1) pointwise mutual information (PMI), (2) pointwise $\\\\mathcal{V}$-information (PVI), and (3) pointwise sliced mutual information (PSI). Initially, the paper introduces the formal definition of each information measure and its pointwise version, followed by analyses of each pointwise measure on (1) invariance properties, (2) geometric properties, and (3) convergence properties. There are also experiments on failure prediction and confidence calibration tasks to measure each pointwise information measure in terms of confidence estimation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and highly enjoyable to read.\\n2. The explanation regarding experiments in this paper goes in-depth, and I highly appreciate the authors for making Jupyter Notebook and source code available.\\n3. 
While this paper does not propose any new methods, it provides novel insights on utilizing various pointwise information measures to estimate the confidence of DNNs.\", \"weaknesses\": \"1. Experiments available in Section 4 of the paper are done on relatively small datasets and architectures. Is it possible to scale up the dataset (TinyImageNet, ImageNet) and architecture (ViT, DeIT), just like the experiments conducted by Jaeger et al., 2023 (https://arxiv.org/abs/2211.15259)? I am asking because benchmark methods used for comparison in Section 4 evaluate their method on a relatively larger scale in terms of data and architecture within their original paper.\\n2. More in the Questions section.\", \"questions\": \"1. In Table 2, why do $n$ for PMI and PSI differ in value? I fail to see how the results can be compatible with each other if the $n$ values are not comparable.\\n2. I might not fully grasp the correlation between the properties of each pointwise information measure outlined in Section 3 with the results in Section 4. For example, how would you correlate the invariance property, in which PMI theoretically has an edge, with the result obtained in Section 4 for failure prediction in Section 4.1 and confidence calibration in Section 4.2?\\n3. For someone who is not that familiar with the following line of work, with regards to the Convergence Rate part detailed in Section 3.3, are there any possible ways to model $\\\\mathcal{V}$ for PVI in a way such that its estimation error is comparable to $|\\\\mathrm{pmi}(x;y) - \\\\hat{\\\\mathrm{pmi}}_n|$ as in the PMI case?\\n4. Typos:\\n- In point 1 of \\\\textbf{Contributions}, there should be a period between estimation and We (\\\"estimation. We\\\" instead of \\\"estimation We\\\"). 
[Section 1]\\n- \\\"from in\\\" -> \\\"from\\\" [Section 4]\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yJeK (Part 3)\", \"comment\": \"> In proof of Prop.6, while [4] provides estimation error bounds on the sample marginal distribution $P(X)$, why the authors can trivially apply their results on the conditional $P(X|Y)$?\\n\\nThank you for your observation. Yes, we note that their result can still be applied when $Y$ is discrete, which is the case for our experiments, but with an additional assumption which we outline below. \\n\\nWhen $Y$ is discrete, estimating the conditional $P(X|Y=1)$ can be treated as estimating a marginal density $P(X)$ using only the samples with $Y=1$, which yields a smaller dataset of size $P(Y=1)\\times n$, and thus the convergence bounds in [4] apply with that smaller sample complexity. Doing so essentially yields an effective sample complexity of $\\gamma n$, where $\\gamma = \\min (P(Y=0), P(Y=1))$, where the probabilities are computed over the training dataset. If $\\gamma \\neq 0$, then this essentially keeps the order of the convergence bound unchanged, as it only adds a constant multiplicative factor that does not depend on $n$, which is $\\gamma^{-\\alpha/(2\\alpha + d_x)}$ for the case of PMI. This indicates that, assuming $\\gamma \\neq 0$, our convergence bounds for PMI and PSI still hold. We will add this assumption in the proposition statements. 
We will be including the results shortly.\\n\\n> Can PI measures extend to other kind of dataset such as text, audio, video, etc.? Is there any challenge with this extension?\\n\\nYes, PI measures can be extended to other types of datasets. However, extending to other modalities may require identifying the most relevant feature representations for the specific data type and potentially adopting a more suitable normalization technique to handle the unique characteristics of these modalities effectively.\"}", "{\"title\": \"General Response (Part 1)\", \"comment\": \"We thank all the reviewers for the constructive feedback and for recognizing the merits of our contributions. We have revised our draft to incorporate reviewer suggestions, which are outlined below. For every addressed point/experiment, we highlight the reviewer ID which requested those suggested changes and experiments.\\n\\n**Clarity based changes**\\n\\n1. Changing some choice of words, such as: avoiding \\u201coptimality\\u201d in the probabilistic causation context, changing \\u201cexplainability\\u201d to \\u201cuncertainty quantification\\u201d, \\u201cmodel uncertainty\\u201d to \\u201cpredictive uncertainty\\u201d and removing the robustness argument as we have not explicitly empirically verified it **\\\\[HF1k\\\\]**. \\n2. Revise the motivational phrasing \\u201cDirect Probability Computation\\u201d to \\u201cInformation Theoretic Connection\\u201d, discussing the implications of using pointwise information gain for the confidence estimation problem, and how it compares to the standard practices **\\\\[HF1k\\\\]**. We also note that our final measures are not direct probabilities but logarithm of probability ratios, and thus we have changed this discussion accordingly. \\n3. 
Improve clarity in emphasizing the focus of this work, such as the fact that we are not only remedying model miscalibration, but also aiming for an overall better confidence estimate which might be useful for a wide variety of tasks **\\[HF1k\\]**. \\n4. We have clarified what post-hoc refers to in our abstract and introduction, following the definition provided in recent prior works **\\[yJeK, HF1k\\]**. \\n\\n**Additional points/discussions**\\n\\n1. We have added more details in our discussions of our empirical results on failure prediction and confidence calibration **\\[HF1k\\]**. \\n2. We summarize our latest interpretations of the theoretical results in a new section called \\u201cTheoretical takeaways\\u201d, which is also later used to better interpret and contextualize the empirical findings within the theory **\\[yJeK, HF1k\\]**. The section on convergence rates has been moved to the Appendix due to space constraints, and its findings are also discussed in the theoretical takeaway section. Our current interpretations in our takeaways allow for the theoretical and empirical findings to be consistent throughout. \\n3. Remark 1 has been changed to contextualize the invariance properties within the standard conditional entropy based interpretation of predictive uncertainty, and why they are necessary **\\[nRyg, yJeK, HF1k\\]**. In Remark 1's text, we also now point to our more detailed argument in Remark 9 on why invariance to general homeomorphisms may be counterproductive. \\n4. A new Remark (Remark 14\\) has been added to discuss the importance of sensitivity to hard margins in the context of confidence estimation, citing other relevant work to support our argument **\\[HF1k\\]**. \\n5. 
We have expanded the discussions section to incorporate the reasons why the findings in the margin sensitivity experiment and the confidence calibration experiment are at odds with each other, and what we learn specifically from that observation in our work **\\[ixZH, HF1k\\]**. \\n\\n**Minor changes and additions**\\n\\n1. We have added assumptions to the theoretical results for convergence rates of pmi and psi, and accordingly incorporated this in the proofs **\\[yJeK\\]**. \\n2. We have included a general definition of sample-wise margin, which later takes specific forms within the respective theoretical results (Theorem 1 and Proposition 5\\) **\\[HF1k\\]**. \\n \\nDue to limited time to address all the questions that require additional experiments (including experiments on large-scale datasets), we could not include them before the deadline for updating the paper. However, we have just finished running some of the additional experiments, and in what follows we provide the additional empirical results and discuss their significance in the context of our work. We believe that these additional experiments and observations further strengthen the merits of our contribution.\"}", "{\"title\": \"Response to Reviewer ixZH\", \"comment\": \"We thank the reviewer for their positive remarks and valuable suggestions on the paper. Below, we address the concerns and questions raised by the reviewer.\\n\\n> Despite the use of pointwise information measures requiring the training of additional models in a post-hoc manner, both PMI and PSI perform worse than the baseline softmax measure.\\n\\nWith temperature scaling, it is true that PMI and PSI seem to perform worse in four out of the five cases, although without it, PMI more often performs better than the baseline. We hypothesize that the lower performance is due to PMI\u2019s and PSI\u2019s fragile convergence rates, which are dependent on the nature of the joint distribution P(x,y). 
The denominator in the convergence rates depends on the minimum of $P(x)$ and $P(x|y)$; thus, when either $P(x)$ or $P(x|y)$ is low, it can potentially lead to significant error in estimation. In datasets where the class-wise distributions are more concentrated, the probability density $P(x)$ will be higher on average, and also $P(x|y)$ will be high. This applies to simpler datasets such as MNIST, where the class-wise distributions are fairly centered. For more complex datasets where the data is more spread out, and there is potentially more overlap between the classes, both $P(x)$ and $P(x|y)$ can be small more often, leading to worse convergence for PMI. The argument extends to PSI, as it also has a minimum over the probabilities and conditional probabilities of the projections. This is also why we see PMI and PSI perform significantly better (relative to other measures) on MNIST than on more complex datasets, where they lag considerably. These observations then also lead to a re-interpretation of which measure may end up with the best convergence rates, as now it is clear that for more complex, spread-out datasets, PMI and PSI\u2019s convergence behaviour can potentially turn out worse than PVI\u2019s. Based on the success of PVI\u2019s empirical results, we then postulate that this may indeed be the case. \\n\\n> Can you please address the concerns about whether the improved performance of PVI is due to the PVI measure itself or the temperature scaling in the PVI estimator?\\n\\nFirstly, please note that the reported benchmark methods (MSP, SM, NE, NG) are with temperature scaling, as stated in Appendix C.3.1 (Line 1813). We will include this in the main paper for clarity. Secondly, we have also reported the PVI results without temperature scaling in Appendix D.2.3 (Table 12), and showed that the uncalibrated variant of PVI (with softmax scaling) still performs better than softmax without temperature scaling. 
Thus, we can conclude that while temperature scaling further improves performance, a significant portion of the improvement is also attributable to PVI itself. For completeness, we will be including a comparison with other benchmarks for the case without temperature scaling.\\n\\n> The empirical evaluation does not include comparisons with established post-hoc confidence calibration methods. To address this gap, it would be beneficial to compare the proposed measures with well-known methods such as Temperature Scaling (TS) [Guo et. al. 2017] and Ensemble Temperature Scaling (ETS) [Zhang et. al. 2020].\\n\\nAs mentioned in the previous point, the reported benchmark methods (MSP, SM, NE, NG) are with temperature scaling. Thank you for the suggestion; we are currently looking into ETS and conducting experiments to compare its performance.\\n\\n> What is the reason for the contradictory findings where PSI is shown to be a better confidence estimator than PVI in experiments on correlation to Margin, while PVI outperforms PSI in experiments on misclassification detection, selective prediction, and calibration analysis?\\n\\nFor the correlation to margin experiment, the focus is on whether the model assigns higher confidence to samples with a larger margin (and vice versa), regardless of whether the prediction is correct. On the other hand, for the misclassification detection, selective prediction, and calibration analysis, the focus is more on the correctness of predictions (directly linked to accuracy). The contrast lies in the interpretation of confidence: margin experiments treat confidence as a measure of sensitivity to decision boundaries, while the other tasks treat it as a measure of predictive reliability. 
Therefore, it\\u2019s possible for PSI to perform better in the margin-based task, while PVI performs better in the accuracy-based tasks.\"}", "{\"title\": \"General Response (Part 3)\", \"comment\": \"Here are the failure prediction results without temperature scaling for all approaches:\\n\\n**MLP, MNIST:**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| ----- | :---: | :---: | :---: | :---: |\\n| MSP | 95.11 (0.48) | 99.87 (0.02) | 41.60 (3.28) | 1.38 (0.05) |\\n| SM | 95.44 (0.66) | 99.88 (0.02) | 41.14 (4.06) | 1.27 (0.22) |\\n| ML | 95.17 (0.30) | 99.92 (0.01) | 33.33 (2.17) | 0.95 (0.07) |\\n| LM | **97.18 (0.20)** | **99.95 (0.00)** | 41.23 (4.06) | **0.59 (0.04)** |\\n| NE | **97.19 (0.19)** | **99.95 (0.00)** | 42.52 (1.97) | **0.59 (0.04)** |\\n| NG | 95.12 (0.48) | 99.87 (0.02) | 42.21 (2.21) | 1.38 (0.16) |\\n| PMI | **97.24 (0.18)** | **99.95 (0.00)** | 41.05 (2.78) | **0.57 (0.05)** |\\n| PSI | 96.61 (0.18) | 99.94 (0.00) | 35.53 (2.43) | 0.69 (0.04) |\\n| PVI | 95.85 (0.43) | 99.89 (0.01) | **50.96 (3.50)** | 1.19 (0.14) |\\n| IR | 50.00 (0.00) | 98.47 (0.03) | 1.53 (0.03) | 15.26 (0.34) |\\n\\n**CNN, FMNIST**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| :---: | :---: | :---: | :---: | :---: |\\n| MSP | 92.02 (0.23) | 99.30 (0.03) | 42.76 (1.46) | 8.75 (0.34) |\\n| SM | 92.14 (0.20) | 99.33 (0.03) | 41.77 (1.47) | 8.50 (0.30) |\\n| ML | 87.42 (1.06) | 99.00 (0.06) | 32.17 (3.45) | 11.65 (0.50) |\\n| LM | 92.53 (0.20) | 99.44 (0.01) | 41.71 (1.50) | 7.53 (0.20) |\\n| NE | 92.60 (0.20) | 99.44 (0.01) | 44.11 (1.77) | 7.48 (0.19) |\\n| NG | 92.03 (0.22) | 99.30 (0.03) | 43.53 (1.67) | 8.74 (0.34) |\\n| PMI | 91.96 (0.35) | 99.37 (0.01) | 42.48 (2.39) | **8.11 (0.09)** |\\n| PSI | 89.42 (0.42) | 99.18 (0.02) | 33.39 (2.78) | 9.99 (0.26) |\\n| PVI | **92.74 (0.28)** | **99.36 (0.03)** | **51.10 (2.29)** | 8.24 (0.37) |\\n\\n**VGG16, STL-10:**\\n\\n| | AUROC (x 
10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| :---: | :---: | :---: | :---: | :---: |\\n| MSP | **88.53 (0.67)** | **98.01 (0.16)** | 49.94 (2.21) | **27.02 (1.56)** |\\n| SM | **88.50 (0.67)** | **98.03 (0.15)** | 49.18 (2.03) | **26.87 (1.54)** |\\n| ML | 85.79 (0.74) | 97.47 (0.17) | 46.67 (1.75) | 31.92 (1.65) |\\n| LM | **88.47 (0.65)** | **98.07 (0.13)** | 48.87 (2.01) | **26.56 (1.30)** |\\n| NE | **88.69 (0.65)** | **98.10 (0.13)** | 50.94 (2.25) | **26.26 (1.30)** |\\n| NG | **88.55 (0.67)** | **98.02 (0.16)** | 50.43 (2.26) | **27.00 (1.56)** |\\n| PMI | 88.04 (0.57) | **97.96 (0.13)** | 48.27 (1.78) | **27.56 (1.27)** |\\n| PSI | 88.05 (0.56) | 97.94 (0.13) | 49.28 (1.84) | **27.71 (1.32)** |\\n| PVI | **89.09 (0.68)** | **98.10 (0.14)** | **53.14 (2.55)** | **26.25 (1.21)** |\\n\\n**Resnet-50, CIFAR-10:**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| :---: | :---: | :---: | :---: | :---: |\\n| MSP | **85.07 (0.40)** | **96.71 (0.09)** | 47.92 (1.96) | **38.95 (1.04)** |\\n| SM | **85.14 (0.39)** | **96.75 (0.07)** | 47.20 (1.85) | **38.64 (0.99)** |\\n| ML | 79.21 (1.06) | 95.04 (0.35) | 41.62 (2.11) | 54.08 (3.33) |\\n| LM | **85.24 (0.38)** | **96.80 (0.09)** | 47.03 (1.84) | **38.28 (1.18)** |\\n| NE | **85.06 (0.40)** | **96.72 (0.10)** | 48.50 (1.85) | **39.00 (1.23)** |\\n| NG | **85.08 (0.40)** | **96.71 (0.09)** | 48.22 (1.90) | **38.94 (1.04)** |\\n| PMI | 83.25 (0.53) | 96.13 (0.12) | 45.44 (2.05) | 44.25 (1.41) |\\n| PSI | 84.43 (0.53) | **96.71 (0.14)** | 46.24 (1.61) | 39.06 (1.46) |\\n| PVI | 86.49 (1.03) | 96.95 (0.31) | **56.02 (3.27)** | **36.77 (2.81)** |\\n\\n**Fine-Tuned Resnet101, ImageNet (Downsampled)**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| ----- | ----- | ----- | ----- | ----- |\\n| MSP | 78.58 (0.21) | 46.91 (0.28) | 94.90 (0.06) | 707.45 (1.21) |\\n| PVI | **80.11 (0.18)** | **49.04 (0.17)** | 
**95.33 (0.07)** | **700.14 (1.11)** |\\n| SM | 76.50 (0.27) | 45.71 (0.25) | 93.92 (0.13) | 713.33 (0.89) |\\n| ML | 50.11 (0.63) | 15.38 (0.18) | 85.21 (0.35) | 849.12 (1.35) |\\n| LM | 71.35 (0.33) | 40.68 (0.25) | 92.14 (0.18) | 733.72 (0.88) |\\n| NE | 76.37 (0.28) | 43.99 (0.32) | 94.24 (0.09) | 717.85 (1.18) |\\n| NG | 77.90 (0.24) | 46.27 (0.30) | 94.68 (0.06) | 710.04 (1.20) |\\n\\n**Densenet121, Tiny-ImageNet**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| ----- | ----- | ----- | ----- | ----- |\\n| MSP | 83.12 (0.73) | 83.29 (0.88) | 81.47 (0.91) | 263.59 (7.36) |\\n| PVI | **86.30 (0.62)** | **86.51 (0.51)** | **84.78 (1.08)** | **244.35 (4.40)** |\\n| SM | 82.75 (0.75) | 83.22 (0.87) | 80.12 (0.99) | 264.47 (7.34) |\\n| ML | 83.16 (0.57) | 83.27 (0.58) | 82.19 (0.70) | 263.57 (6.11) |\\n| LM | 82.41 (0.77) | 83.78 (0.80) | 79.31 (1.02) | 262.31 (6.98) |\\n| NE | 83.61 (0.69) | 84.35 (0.78) | 82.29 (0.79) | 257.91 (6.94) |\\n| NG | 83.25 (0.72) | 83.34 (0.87) | 81.81 (0.87) | 263.18 (7.36) |\\n| IR | 83.03 (0.74) | 82.42 (0.91) | 81.01 (0.92) | 265.96 (7.57) |\\n\\nNote that IR is short for Isotonic Regression.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 4)\", \"comment\": \"> By the invariance properties in sec 3.1 we have that PMI is best, by the geometric and convergence properties in sec 3.2. and 3.3 we have that PSI is best, but in the performed experiments we find that PVI is best. How can you reconcile this and claim that theory and experiments are in line with each other?\\n\\n**Invariance Properties**: Yes, PMI is best in terms of being the most invariant to transformations, however this may not be a boon but rather a bane in the context of the uncertainty quantification problem. We have noted this in Remark 11 (Page 24) of the paper. Our argument can be summarized as follows. 
Let us consider a PI measure between a neural network layer $T$ and the output labels $Y$, and assume that $T\u2019$ denotes another instance of the layer output $T$ which has the same information but arises from a different initialization of the network. If the relationship between $T$ and $T\u2019$ is linear and invertible, then the invariance is helpful, as the network weights can adjust to preserve the network function, and thereby the degree of confidence. However, invariance to non-linear invertible and continuous transformations (for pointwise measures) also implies that the estimated confidence measure remains unchanged when $T\u2019$ is related to $T$ in a non-linear manner. If the function is highly non-linear, then the estimated label for $T\u2019$ could very likely end up having a different level of confidence compared to $T$, as the neural network\u2019s weights are limited in the ways they can change to preserve the network function. Therefore, invariance to any homeomorphic transformation could be counterproductive.\\n\\n**Geometric Properties**: For geometric dependence properties, we indeed find empirically that PSI is the most correlated to margin (Table 1), reflecting the main theoretical takeaway. We note that for the correlation to margin experiment, the focus is on whether the model assigns higher confidence to samples with a larger margin (and vice versa), regardless of whether the prediction is correct. On the other hand, for the misclassification detection, selective prediction, and calibration analysis, the focus is more on the correctness of predictions (directly linked to accuracy). The contrast lies in the interpretation of confidence: margin experiments treat confidence as a measure of sensitivity to decision boundaries, while the other tasks treat it as a measure of predictive reliability. 
Therefore, it\\u2019s possible for PSI to perform better in the margin-based task and be more margin sensitive, while PVI performs better in the accuracy-based tasks.\\n\\n**Convergence Properties**: For convergence properties, we have since found a more accurate way of interpreting and comparing the convergence results for PMI, PVI and PSI. For PMI and PSI, in our initial summary of the bounds, we ignored the impact of the denominator term in the bound (Eqs. 13 and 14). However, this term can drastically impact the convergence rate of PMI and PSI. This yields potentially fragile convergence rates for PMI and PSI, depending on the nature of the joint distribution P(x,y). For PMI, the denominator in the convergence rates depends on the minimum of $P(x)$ and $P(x|y)$, thus when either $P(x)$ or $P(x|y)$ is low, it can potentially lead to significant error in estimation. In datasets where the class-wise distributions are more concentrated, probability density will be $P(x)$ will be higher on average, and also $P(x|y)$ will be high. This applies to simpler datasets such as MNIST where the class-wise distributions are fairly centered. For more complex datasets where the data is more spread out, and there is potentially more overlap between the classes, both $P(x)$ and $P(x|y)$ can be small more often, leading to worse convergence for PMI. The same argument can be applied to PSI, as it also has a minimum over the probabilities and conditional probabilities of the projections. This is also why we see PMI and PSI perform significantly better for MNIST relative to other measures compared to other complex datasets where they lag considerably. These observations then also lead to a re-interpretation of which measure may end up with the best convergence rates, as now it is clear that for more complex, spread out datasets, PMI and PSI\\u2019s convergence behaviour can potentially turn out worse than PVI. 
Based on the success of PVI\u2019s empirical results, we then postulate that this may indeed be the case. \\n\\nWe are adding an experiment to showcase this disparity of convergence rates for simple versus complex datasets to support this hypothesis, comparing the three measures.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 5)\", \"comment\": \"> To re-iterate on the connection to uncertainty quantification: it is repeatedly stated that there is a high relevance for \u201cmodel uncertainty\u201d, which equates to a notion of epistemic uncertainty. But then, Remark 1 motivates that uncertainty should be invariant to data transformations, which now relates to notions of data (aleatoric) uncertainty. Yet overall, the quantity of interest is in fact $p(y|x)$ which is simply predictive uncertainty of the model given an input. So, it does not seem to me like there is a principled association between the information measures and actual notions of uncertainty, and the authors are not clear about what kind of uncertainties we are trying to address. Overall, the connections to uncertainty are mainly contained in the motivating introduction, and in Remark 1 and Remark 2, and are all very high-level and speculative.\\n\\nFor Remark 1, we did not intend to consider aleatoric uncertainty when talking about invariance to data transformations, but rather predictive uncertainty (how uncertain a model should be about a prediction $\\hat{y}$, given the input $x$). In that context, we argue that the predictive uncertainty should not change when the data (and the distribution) undergoes certain simple bijective transformations (such as scaling, rotation, invertible matrix multiplication), and as mentioned in Remark 11 (and in the response to the previous point), we do argue that being invariant to the large class of homeomorphic transformations may be counter-productive. 
This is because simple linear matrix-based bijective transformations on the features can be countered directly by a set of equivalent weights (by multiplying the weights with the matrix inverse) that preserve the model\u2019s minimum loss and, in doing so, yield a model that will have similar predictive confidence due to the final function being the same. In contrast, non-linear transformations are not guaranteed to still preserve the model\u2019s function that yields the minimum loss, especially for neural networks, as a direct transformation of the weights that preserves the original network function may not exist.\\n\\nWe will revise the terminology wherever appropriate to explicitly refer to predictive uncertainty instead of the broader term \\\"model uncertainty.\\\" Predictive uncertainty aligns directly with the quantity of interest, p(y\u2223x), which our proposed measures aim to quantify. While Remark 1 touches on invariance to data transformations, our primary focus is always on predictive uncertainty as a whole. We agree that this could have been better articulated, and we will adjust the wording to improve clarity.\\n\\n> Regarding Remark 1: the quantity of interest is $p(y|x)$, whereas data transformations are applied to features X. Since we then have that $g(X) \\neq X$, I don't necessarily see an issue if $p(y|g(X)) \\neq p(y|X)$ because we are conditioning on a different quantity.\\n\\nWhen looking at the invariance properties of the PI measures, we account for transformations $g(X)$ that affect not only the features but also shift the distribution p(X) as a result of those transformations. As an example, the observed pairs $(x_i,y_i)$ then get transformed to $(g(x_i),y_i)$. 
This is also why all of our invariance results are of the form $pi_P(x,y)= pi_{\\\\mathcal{T}P}(\\\\mathcal{T}x,y)$, where $\\\\mathcal{T}$ is the transformation, and $\\\\mathcal{T}x$ is the transformation applied to $x$, and $\\\\mathcal{T}P$ indicates the transformed distribution as a result of $x$ changing to $\\\\mathcal{T}x$. Thus, fundamentally we will have $P(y|X)= \\\\mathcal{T}P(y|g(X))$, because of the bijection between $X$ and $g(X)$. Thus, naturally, we want our measures to be invariant to\\u00a0these\\u00a0shifts.\"}", "{\"comment\": \"Dear Authors,\\n\\nThanks for providing a well-documented clarification regarding some of my concerns and questions in the paper. I'll wait for the results on larger datasets before making an informed decision concerning the paper, since as of the time I provided this response, I have yet to see relevant results on large datasets. For now, I'll keep the same rating for this submission as to the original rating.\\n\\nWarm regards,\\n\\nReviewer nRyg\"}", "{\"title\": \"Response to Reviewer ixZH\", \"comment\": \"Thank you for the comment. We will address the two concerns that the reviewer has below.\\n\\n> The proposed estimators are dependent on the datasets and their class-wise distribution, which is a significant limitation for their use as confidence estimators or model uncertainty estimates.\\n\\nThe proposed estimators, just like the prediction models themselves, do depend on the dataset and therefore by extension their class-wise distributions. However, once trained, the pointwise measures are standalone functions which do not require access to the data. We summarize for each measure as follows. For PVI, once the PVI network has been trained from scratch, we can directly use the PVI network for estimating PVI, without requiring continued access to data. 
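To make this standalone usage concrete: once trained, PVI for a test input reduces to a difference of two log-probabilities. A minimal sketch (illustrative only: we approximate the null model by the empirical label marginal, which is what a model trained on the empty input converges to under cross-entropy, and `probs_x` stands in for the trained PVI network's output):

```python
import numpy as np

def pvi_all_classes(probs_x, label_marginal):
    """Pointwise V-information of input x w.r.t. every class at once:
    pvi(x -> y) = -log p_null(y) + log p_V(y | x).

    probs_x:        (K,) class probabilities from the trained PVI network
    label_marginal: (K,) empirical label frequencies (stand-in for the
                    null model trained on the empty input)
    """
    return np.log(probs_x) - np.log(label_marginal)
```

For example, with a uniform label marginal, any class predicted above chance receives positive PVI and any class predicted below chance receives negative PVI, so a single forward pass yields confidence scores for all classes without further access to the training data.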
For PSI, once the Gaussian distribution statistics for every projection has been stored in memory, we have a PSI estimator that works standalone without requiring continued access to data as well. Lastly for PMI, we use another neural network for estimating the density ratios, which, once trained, can be used directly for PMI estimation without continued access to data. So our pointwise estimators, just like the models, need a training process to learn, and from an inference time standpoint, the added time is negligible. Only PSI is relatively slower to compute, as it requires computing log likelihoods across sufficient number of projections. Critically, our measures do not interfere with the original model weights, or the training process, and therefore categorizes as post-hoc according to the definition in Cattelan & Silva (2024). \\n\\n> The improvement in the reported results heavily relies on Temperature Scaling [Guo et al. 2017]. This is evident in Table 12 in the Appendix, where the results are inferior to baseline softmax measures without temperature scaling.\\n\\nThere happens to be a misunderstanding, the PVI results are actually superior to baseline softmax measures without temperature scaling. 
In Table 12 of the Appendix, the results without temperature scaling are as follows (note that softmax scaling is not temperature scaling, just the softmax operator, calibration refers to temperature scaling):\\n\\n| Method & Dataset | Softmax (uncalibrated baseline) | PVI (uncalibrated, with softmax scaling)\\n|-|:-:|:-:|\\n| MLP, MNIST (AUROC$_f \\\\times 10^2) \\\\uparrow$ | 95.11 $\\\\pm$ 0.48 | **95.85 $\\\\pm$ 0.43** |\\n| MLP, MNIST (AURC$_f \\\\times 10^3) \\\\downarrow$ | 1.38 $\\\\pm$ 0.16 | **1.19 $\\\\pm$ 0.14** |\\n| CNN, F-MNIST (AUROC$_f \\\\times 10^2) \\\\uparrow$ | 92.03 $\\\\pm$ 0.23 | **92.75 $\\\\pm$ 0.28** |\\n| CNN, F-MNIST (AURC$_f \\\\times 10^3) \\\\downarrow$ | 8.75 $\\\\pm$ 0.34 | **8.24 $\\\\pm$ 0.37** |\\n\\nWe believe that this improvement is significant, especially considering that baseline softmax is widely recognized as a tough baseline to surpass, as evidenced in previous studies (Cattelan & Silva, 2024, Zhu et al., 2022).\\n\\n**References**:\\n\\nCattelan, L. F. P., & Silva, D. (2024). How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks. The 40th Conference on Uncertainty in Artificial Intelligence.\\n\\nZhu, F., Cheng, Z., Zhang, X.-Y., & Liu, C.-L. (2022). Rethinking Confidence Calibration for Failure Prediction. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 518\\u2013536).\"}", "{\"title\": \"General Response (Part 2)\", \"comment\": \"We conducted the following additional experiments after the discussion with the reviewers:\\n\\n1. **Failure Detection (without temperature scaling):** We perform experiments in the same setting as Table 2 of our paper, measuring various aspects of failure detection, except we do not perform temperature scaling based calibration for any of the measures **\\\\[ixZH, HF1k\\\\]**. \\n2. 
**Calibration Performance and Sharpness (without temperature scaling):** We perform experiments in the same setting as Table 3 of our paper, where we report the calibration error, except we do not perform temperature scaling based calibration for any of the measures **\\\\[ixZH, yJeK, HF1k\\\\]**. We also include the sharpness measure for all methods and add another measure for evaluating the calibration error, called negative log-likelihood (NLL) **\\\\[yJeK\\\\]**. \\n3. **Adding large scale datasets and deeper models:** We extend our experiments to the full Imagenet dataset (64x64 downsampled) which consists of 1000 classes and close to 1 million training examples with Resnet-101, and the Tiny-Imagenet dataset (64x64 images) which has 200 classes with Densenet-121 **\\\\[nRyg, 9w5v\\\\]**. We test our measures for failure detection and confidence calibration. Note that for Imagenet and Tiny-Imagenet, we do not provide the PMI and PSI results, as it takes a significant computational load. We had earlier mentioned this in our discussions with the reviewers **\\\\[nRyg,9w5v\\\\]**.\\n \\n\\nApart from these additions, we also incorporate the results of another method requested by one of the reviewers, namely isotonic regression, for both failure prediction and confidence calibration on MNIST and Tiny-Imagenet **\\\\[ixZH, 9w5v, HF1k\\\\]**. We will be adding the results for isotonic regression for all datasets and models in the final version of the paper, we only include these in this response as the experiments are still ongoing.\", \"we_summarize_the_observations_from_our_results\": \"1. For failure prediction, we find that even without temperature scaling, in most cases PVI showcases the best performance overall, and the improvement in performance is most significant for the AUPR (failure) metric, which is related to the proportion of detected incorrect predictions. 
On downsampled-Imagenet, PVI shows the strongest performance across all four failure prediction measures. Our result for IR (Isotonic Regression) shows that it does not perform well for the failure prediction task, agreeing with Zhu et al. (2022), which shows that many confidence calibration methods are useless or harmful for failure prediction.\\n2. For confidence calibration, an interesting observation we have is that, without temperature scaling, PMI performs significantly better than other measures in most cases, including PVI. Specifically, we find that PMI's calibration error without temperature scaling is in most cases the best overall, when considering both with and without temperature scaling scenarios. Interestingly, we also see that particularly for more complex datasets, PSI (without temperature scaling) performs very well and often has the best performance across both scenarios. Having said that, PVI still performs better than softmax and softmax margin overall, when avoiding temperature scaling. \\n3. For confidence calibration, we also find that the sharpness of our measures is acceptably low, except for downsampled-Imagenet, where the sharpness of all measures (including softmax) is high, mainly due to the low accuracy of the classifier.\\n4. For large-scale datasets, we find that PVI still has the best performance for both failure prediction and confidence calibration tasks, with or without temperature scaling.\", \"reference\": \"Zhu, F., Cheng, Z., Zhang, X.-Y., & Liu, C.-L. (2022). Rethinking Confidence Calibration for Failure Prediction. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 518\\u2013536).\"}", "{\"title\": \"Response to Reviewer HF1k (Part 9)\", \"comment\": \"> I am struggling to see the real novelty of the paper. The proposed information measures and their estimation methods are all taken from existing papers, and many of the theoretical results rely strongly on these papers as well. 
Is the theoretical analysis novel? Personally, it is hard for me to say since I am unfamiliar with this research domain. Is the novelty then in its application/use for confidence estimation? The experiments are unconvincing, and the connections to uncertainty do not go beyond some high-level arguments.\\n\\nYes, the proposed measures have been taken from existing papers, but two of the three measures are quite recent and through our work we have rigorously shown that these measures can definitely be applied to uncertainty quantification. The theoretical analysis, being novel, definitely relies on existing papers, but the resulting perspectives on the problem of uncertainty quantification and confidence estimation are new, which we hope provides new theoretical tools in this domain to analyse and explore upcoming measures. The connections to uncertainty were rigorously validated in our experiments, and empirical results have repeatedly demonstrated the improvements from some of these measures, mainly PVI. Furthermore, the discrepancy of the confidence calibration results with the opposite observation in the margin correlation experiments is also a novel finding, which may indicate that margin sensitivity may not be a fruitful direction for theoretical treatment of upcoming confidence measures. Precise care was also taken in all our experiments to show different evaluation aspects to the confidence estimation problem, going beyond just calibration error based evaluation which is often the norm. We will make sure that all these points are clearly and accurately conveyed in our paper.\\n\\n\\n> In addition, their theoretical insights and empirical results seem somewhat contradictory on what the best information measure is supposed to be. 
For example, they conclude that \\\"This superior performance is likely due to PVI being the most well-rounded metric, particularly in terms of its invariance and margin sensitivity\\u201d, even though Remarks 1, 2 and 3 on these properties rank PVI lowly.\\n\\nThe reviewer has mentioned these points before, and we have addressed them accordingly (please refer to Response Part 4).\"}", "{\"title\": \"Response to Reviewer nRyg\", \"comment\": \"We are grateful to the reviewer for their positive remarks and valuable suggestion on the paper. Below, we address the concerns and questions raised by the reviewer.\\n\\n> Is it possible to scale up the dataset (TinyImageNet, ImageNet) and architecture (ViT, DeIT), just like the experiments conducted by Jaeger et al., 2023 (https://arxiv.org/abs/2211.15259)?\\n\\nWe are currently conducting experiments to scale up the dataset and architecture to provide a more comprehensive comparison of PVI with other benchmark methods. However, scaling up PMI and PSI remains computationally expensive at this stage, as they need to be computed separately for each class due to normalization requirements. In contrast, PVI offers a significant advantage in scalability, as it naturally provides results for all classes within a single training process.\\n\\n> In Table 2, why do $n$ for PMI and PSI differ in value?\\n\\nThe difference in $n$ for PMI and PSI in Table 2 is mainly because PMI requires an exponential increase in the number of samples to achieve a comparable reduction in error when $d_x>1$. This is mainly because of the rate difference in the convergence bounds for PMI and PSI, when the dimensionality $d_x>1$ (In this case $d_x=3$), PMI\\u2019s convergence can get significantly slower (Proposition 6). 
Propositions 6 and 7 imply that in this case, for PMI\\u2019s convergence rate to match PSI\\u2019s with $n$ samples (assuming no changes in the order of magnitude in the denominator), one must have $n\\u2019 = n^{5/3}$ samples (e.g., $n = 10^3$ PSI samples correspond to $n\\u2019 = 10^5$ PMI samples). In reality, the denominator for PMI (Eq. 13) will likely be smaller than PSI\\u2019s (Eq. 14), as the probability density for PMI is estimated in $d_x$ dimensions whereas the probability densities for PSI are estimated in one dimension. This further increases the number of samples required for PMI to yield similar convergence errors as PSI, which we indeed see in Table 2. \\n\\n> How would you correlate the invariance property, in which PMI theoretically has an edge, with the result obtained in Section 4 for failure prediction in Section 4.1 and confidence calibration in Section 4.2?\\n\\nYes, PMI is best in terms of being the most invariant to transformations; however, this may rather be a disadvantage in the context of the uncertainty quantification problem. Remark 11 (Page 24) of our paper provides a more in-depth reasoning for why this may be the case. Our argument can be summarized as follows. Let us consider a PI measure between a neural network layer $T$ and the output labels $Y$, and assume that $T\\u2019$ denotes another instance of the layer output $T$ which has the same information but arises from a different initialization of the network. If the relationship between $T$ and $T\\u2019$ is linear and invertible, then the invariance is helpful, as the network weights can adjust to preserve the network function, and thereby the degree of confidence. However, invariance to non-linear invertible and continuous transformations (for pointwise measures) also implies that the estimated confidence measure remains unchanged when $T\\u2019$ is related to $T$ in a non-linear manner. 
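For contrast, the linear-and-invertible case can be checked numerically; a minimal sketch (a single linear layer, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))   # original layer weights
A = rng.normal(size=(3, 3))   # invertible feature transformation g(x) = A x
x = rng.normal(size=3)

# The transformed features A @ x are countered exactly by the adjusted
# weights W @ inv(A), so the layer output (and hence any downstream
# predictive confidence) is preserved:
assert np.allclose(W @ x, (W @ np.linalg.inv(A)) @ (A @ x))
```

No analogous weight adjustment is guaranteed to exist for a general non-linear bijection, which is the core of the argument above.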
If the function is highly non-linear, then the estimated label for $T\\u2019$ could very likely end up having a different level of confidence compared to $T$, as the neural network\\u2019s weights are limited in the ways it can change to preserve the network function. Therefore, invariance to any homeomorphic transformation could be counterproductive.\\n\\n> With regards to the Convergence Rate part detailed in Section 3.3, are there any possible ways to model $\\\\mathcal{V}$ for PVI in a way such that its estimation error are comparable to $|pmi(x;y)\\u2212\\\\hat{pmi}_n|$ as in PMI case?\\n\\nCurrently, due to the presence of some unknown constants in the convergence rates, we do not have a specific method to find an equivalent $\\\\mathcal{V}$ that yields the same rates. This is also because the nature of the convergence bound for PVI is probabilistic, whereas the other convergence bounds are absolute, which prevents us from directly choosing $\\\\mathcal{V}$ such that it results in the same convergence behaviour.\"}", "{\"title\": \"General Response (Part 5)\", \"comment\": \"Here are the confidence calibration results without and with temperature scaling for all approaches:\\n\\n**MLP, MNIST without Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 1.20 (0.04) | 0.01 (0.00) | 0.102 (0.005) |\\n| PMI | **0.43 (0.12)** | 0.05 (0.01) | **0.054 (0.003)** |\\n| PSI | 5.12 (0.85) | 0.28 (0.03) | 0.103 (0.012) |\\n| PVI | 1.12 (0.05) | 0.01 (0.00) | 0.102 (0.005) |\\n| SM | 1.13 (0.05) | \\\\- | \\\\- |\\n\\n**MLP, MNIST with Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 1.05 (0.07) | 0.01 (0.00) | 0.080 (0.007) |\\n| PMI | 1.31 (0.05) | 0.01 (0.00) | 0.107 (0.013) |\\n| PSI | 1.15 (0.39) | 0.07 (0.04) | **0.075 (0.014)** |\\n| PVI | **0.94 (0.05)** | 0.02 (0.01) | 0.068 (0.008) |\\n| SM | 0.72 (0.09) | \\\\- | \\\\- |\\n\\n\\n**CNN, Fashion MNIST 
without Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 5.00 (0.16) | 0.04 (0.00) | 0.387 (0.003) |\\n| PMI | **1.87 (0.76)** | 0.15 (0.03) | **0.210 (0.012)** |\\n| PSI | 26.29 (1.06) | 0.94 (0.03) | 0.504 (0.019) |\\n| PVI | 4.72 (0.20) | 0.04 (0.00) | 0.387 (0.003) |\\n| SM | 4.58 (0.11) | \\\\- | \\\\- |\\n\\n**CNN, Fashion MNIST with Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 3.02 (1.56) | 0.09 (0.05) | 0.259 (0.057) |\\n| PMI | 4.00 (1.02) | 0.07 (0.03) | 0.315 (0.082) |\\n| PSI | 4.22 (1.20) | 0.23 (0.07) | 0.249 (0.006) |\\n| PVI | **2.55 (0.66)** | 0.13 (0.04) | **0.213 (0.035)** |\\n| SM | 3.77 (0.38) | \\\\- | \\\\- |\\n\\n\\n**VGG16, STL10 without Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 9.89 (0.32) | 0.09 (0.01) | 0.746 (0.049) |\\n| PMI | 5.76 (0.30) | 0.23 (0.01) | 0.460 (0.012) |\\n| PSI | **4.80 (0.54)** | 0.56 (0.02) | **0.423 (0.010)** |\\n| PVI | 8.87 (0.33) | 0.09 (0.01) | 0.746 (0.049) |\\n| SM | 8.67 (0.48) | \\\\- | \\\\- |\\n\\n**VGG16, STL10 with Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 7.42 (2.69) | 0.20 (0.14) | **0.629 (0.201)** |\\n| PMI | 9.20 (3.36) | 0.12 (0.12) | 0.980 (0.287) |\\n| PSI | 7.75 (3.15) | 0.32 (0.23) | 0.638 (0.249) |\\n| PVI | **4.91 (2.29)** | 0.28 (0.12) | 0.496 (0.116) |\\n| SM | 8.33 (1.61) | \\\\- | \\\\- |\\n\\n**ResNet50, CIFAR10 without Temperature Scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 10.77 (0.42) | 0.08 (0.00) | 0.883 (0.025) |\\n| PMI | 8.61 (0.57) | 0.15 (0.01) | 0.626 (0.032) |\\n| PSI | **4.10 (1.44)** | 0.33 (0.05) | **0.503 (0.023)** |\\n| PVI | 9.75 (0.40) | 0.08 (0.01) | 0.932 (0.052) |\\n| SM | 10.77 (0.42) | \\\\- | \\\\- |\\n\\n**ResNet50, CIFAR10 with Temperature Scaling**\\n\\n| 
Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 10.79 (0.54) | 0.08 (0.00) | 0.888 (0.050) |\\n| PMI | 12.25 (0.49) | 0.03 (0.01) | 1.479 (0.111) |\\n| PSI | 10.97 (1.45) | 0.07 (0.04) | 1.210 (0.361) |\\n| PVI | **9.59 (0.35)** | 0.09 (0.02) | **0.886 (0.054)** |\\n| SM | 9.83 (0.52) | \\\\- | \\\\- |\\n\\n**Fine-Tuned Resnet101, ImageNet (Downsampled) without temperature scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | **1.18 (0.05)** | 4.45 (0.01) | **4.656 (0.005)** |\\n| PVI | 4.03 (0.07) | 4.45 (0.01) | **4.653 (0.005)** |\\n| SM | 6.18 (0.16) | \\\\- | \\\\- |\\n\\n**Fine-Tuned Resnet101, ImageNet (Downsampled) with temperature scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 5.46 (0.41) | 4.01 (0.04) | 4.726 (0.008) |\\n| PVI | **4.25 (0.11)** | 4.51 (0.03) | **4.651 (0.005)** |\\n| SM | 4.85 (0.08) | \\\\- | \\\\- |\\n\\n**Densenet121, Tiny-ImageNet without temperature scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 35.99 (0.84) | 0.53 (0.01) | **4.268 (0.101)** |\\n| PVI | **17.25 (0.89)** | 0.54 (0.01) | 4.420 (0.166) |\\n| SM | 29.21 (0.90) | \\\\- | \\\\- |\\n| IR | 25.39 (0.77) | 1.13 (0.02) | 2.942 (0.035) |\\n\\n**Densenet121, Tiny-ImageNet with temperature scaling**\\n\\n| Method | ECE | Sharpness | NLL |\\n| ----- | ----- | ----- | ----- |\\n| MSP | 33.82 (1.89) | 0.63 (0.08) | 3.955 (0.243) |\\n| PVI | **16.24 (0.92)** | 0.75 (0.08) | **3.738 (0.274)** |\\n| SM | 26.97 (2.15) | \\\\- | \\\\- |\\n| IR | 25.39 (0.77) | 1.13 (0.02) | 2.942 (0.035) |\"}", "{\"summary\": \"Uncertainty estimation becomes essential for ensuring safety AI deployment. To estimate this, various measures, commonly based on softmax probs, are employed, but they are often poorly calibrated. The authors handle this issue by utilizing pointwise information measures - PMI, PSI, PVI. 
They analyze several properties of measures to validate its reliability, i.e., invariance and sensitivity to margin, and conduct empirical evaluations to support its effectiveness on several datasets. Experimental results provide some findings regarding the superiority and scalability among the measures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It provides theoretical analysis regarding the properties of several PI measures, which are supported by empirical observations.\", \"Effectiveness of the measures are validated via two types of experiments.\"], \"weaknesses\": [\"More comprehensive analysis with other uncertainty metrics are needed, such as MC Dropout, MCMC, or Laplace approximation. It also needs to compare with non-pointwise information measures, such as MINE [1].\", \"Empirical results are based only on small-scale datasets, such as MNIST or CIFAR-10, although this paper aims to address scalability.\", \"To better understanding, it would be helpful to add some visualizations, such as saliency map (with more curated examples as well as Fig.5), ROC curve, or ECE diagram.\", \"[1] Belghazi et al., Mutual Information Neural Estimation, ICML 2018\"], \"questions\": [\"In L500, why convergence rate is a crucial factor for confidence calibration?\", \"Why PVI is favorable for complicated dataset than other PI measures?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Follow-up Queries (Part 2)\", \"comment\": \">Yet, if I plan on using your approaches for uncertainty quantification then I would like to know what the take-away is. Do they shed any additional insights into what measure I might prefer for a given setting (e.g., if I have strongly overlapping class supports, or high correlation, etc.). 
It is good to state theoretical results, but it is better to explain how they are conducive to your stated goal of improving UQ. In that context, it is necessary to also relate them to each other.\\n\\nWe acknowledge this point from the reviewer, and have since updated the discussion in the paper, adding a new section on Theoretical takeaways, and detailing the concrete findings of our theoretical results. We also respond to the above query more specifically, below:\", \"our_theoretical_contributions_have_three_parts\": \"invariance behaviour, margin dependence, and convergence rates. Out of them, invariance behaviour is absolute and in this part we actually can compare measures directly, which we have done through our work. PMI is the most invariant, followed by PVI and then PSI.\\nOn margin dependence, although the comparison between PSI and PVI for instance is not immediately clear, there are outcomes to our results that are in fact absolute. For instance, we see PMI be invariant to hard margin, and yet PSI being sensitive to hard margin (setting epsilon to zero in Theorem 1 still yields a dependence on margin). Similarly, we also see PVI being dependent on hard margin. Comparing PSI and PVI in absolute terms is of course quite a challenge as PVI depends on the nature of the function class V, and the dependency can drastically vary between different Vs. However, we believe our PVI result efficiently integrates all of this added complexity of choice into a single Lipschitz constant term M, which then becomes the main decider for the degree of margin sensitivity. When the network function has a high Lipschitz constant, then it is less margin sensitive and vice versa. Therefore, smoother functions will have greater margin sensitivities. As M can be anything, we cannot in absolute terms compare margin sensitivities between PVI and PSI. 
Therefore, our current discussion around PVI's result is primarily around the behaviour of margin sensitivity when the margin increases, as the exponential dependence ensures that eventually the margin sensitivity will be negligible. \\nHaving said this, we did perform an experiment where we measure margin sensitivity via correlations and find that our theoretical observations are consistent and do not contradict our results.\\nOn convergence rates, our results do allow us to directly compare PMI and PSI convergence rates, as the exponent on n is different for PMI and PSI. This fact is also subsequently validated by our empirical observations in Table 1. For PVI, again due to the presence of V, it is hard, if not impossible, to state in absolute terms if PVI will be better or worse in terms of convergence behaviour. So, our results on PVI's convergence are mainly meant to be indicative of what factors can influence the convergence of a PVI estimator.\"}", "{\"title\": \"Response to Reviewer 9w5v (Part 1)\", \"comment\": \"We appreciate the reviewer for their positive remarks and valuable suggestions. Below, we address the concerns and questions that the reviewer has regarding our paper:\\n\\n> More comprehensive analysis with other uncertainty metrics are needed, such as MC Dropout, MCMC, or Laplace approximation.\\n\\nIt is important to note that, unlike the proposed pointwise information measures, the suggested benchmark methods inherently alter the model's predictions. Therefore, it would not be fair to compare these methods. That said, we will be incorporating additional benchmark methods, including those proposed in Cattelan & Silva (2024), into our analysis. We are currently conducting the experiment, and will be including the results shortly.\", \"reference\": \"Tsai, Y.-H. H., Zhao, H., Yamada, M., Morency, L.-P., & Salakhutdinov, R. (2020). Neural Methods for Point-wise Dependency Estimation. 
Advances in Neural Information Processing Systems, 33, 1\\u201312.\\n\\n> Empirical results are based only on small-scale datasets, such as MNIST or CIFAR-10, although this paper aims to address scalability.\\n\\nWe are currently conducting experiments on larger-scale datasets to provide a more comprehensive comparison of PVI with other benchmark methods. However, scaling up PMI and PSI remains computationally expensive at this stage, as they need to be computed separately for each class due to normalization requirements. In contrast, PVI offers a significant advantage in scalability, as it naturally provides results for all classes within a single training process.\\n\\n> It would be helpful to add some visualizations, such as saliency map (with more curated examples as well as Fig.5), ROC curve, or ECE diagram.\\n\\nThank you for the suggestion. We will be including all the suggested plots in the revised manuscript.\\n\\n> Why convergence rate is a crucial factor for confidence calibration?\\n\\nConvergence rates become important to the confidence calibration problem, as they indicate the level of noise in the estimated confidence measures. One way to visualize, is to consider multiple datasets of the same sample complexity sampled from the same distribution, and in each case consider the estimated pointwise measure on a fixed test sample $(x,y)$. The estimated measure will be the ground truth PI value for $(x,y)$ plus an additional noise term, which will vary when a different input dataset sample is chosen. The degree of variability of this noise is essentially reflected in the convergence rate. This added noise will only affect the confidence calibration negatively, as for measures which are more noisy, the noise can significantly affect the confidence measures and subsequently decrease the calibration performance due to the same. When we have access to infinite data samples, the convergence rates become irrelevant to the problem. 
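The mechanism just described can be simulated directly. In the illustrative sketch below (our own construction, not an experiment from the paper), correctness is Bernoulli in the confidence, so the clean scores are perfectly calibrated by construction, and estimator noise is modeled as additive Gaussian; the binned calibration error of the noisy scores is strictly worse:

```python
import numpy as np

def ece(conf, correct, n_bins=15):
    """Binned expected calibration error: coverage-weighted |accuracy -
    mean confidence| gap over equal-width confidence bins."""
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    gap = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            gap += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return gap

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=20000)
correct = (rng.uniform(size=conf.size) < conf).astype(float)  # calibrated by construction
noisy = np.clip(conf + rng.normal(0.0, 0.15, size=conf.size), 0.0, 1.0)

assert ece(noisy, correct) > ece(conf, correct)
```

Here the estimation noise degrades calibration even though the underlying model is unchanged, which is exactly the effect described above.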
Furthermore, as convergence rates are usually inversely proportional to dimensionality, the impact of convergence rates on the confidence calibration results will be felt significantly more in the low to moderate data regime and the high dimensional regime.\"}", "{\"title\": \"General Response (Part 4)\", \"comment\": \"Here are the failure prediction results with temperature scaling for all approaches for large-scale datasets:\\n\\n**Fine-Tuned Resnet101, ImageNet (Downsampled)**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| ----- | ----- | ----- | ----- | ----- |\\n| MSP | 78.64 (0.20) | 47.02 (0.26) | 94.91 (0.06) | 707.10 (1.20) |\\n| PVI | **80.13 (0.19)** | **49.03 (0.17)** | **95.34 (0.08)** | **700.12 (1.09)** |\\n| SM | 76.29 (0.27) | 45.46 (0.26) | 93.86 (0.14) | 714.28 (0.83) |\\n| ML | 50.11 (0.63) | 15.38 (0.18) | 85.21 (0.35) | 849.12 (1.35) |\\n| LM | 71.35 (0.33) | 40.68 (0.25) | 92.14 (0.18) | 733.72 (0.88) |\\n| NE | 77.20 (0.21) | 45.45 (0.25) | 94.46 (0.07) | 713.11 (1.28) |\\n| NG | 78.32 (0.19) | 46.84 (0.26) | 94.80 (0.06) | 708.04 (1.26) |\\n\\n**Densenet121, Tiny-ImageNet**\\n\\n| | AUROC (x 10^2) | AUPR success (x 10^2) | AUPR failure (x 10^2) | AURC (x 10^3) |\\n| ----- | ----- | ----- | ----- | ----- |\\n| MSP | 83.28 (0.78) | 83.65 (0.96) | 81.71 (1.01) | 261.66 (7.62) |\\n| PVI | **86.52 (0.57)** | **86.93 (0.51)** | **85.03 (1.04)** | **242.07 (4.44)** |\\n| SM | 82.77 (0.75) | 83.28 (0.87) | 80.15 (0.99) | 264.12 (7.22) |\\n| ML | 83.16 (0.57) | 83.27 (0.58) | 82.19 (0.70) | 263.57 (6.11) |\\n| LM | 82.41 (0.77) | 83.78 (0.80) | 79.31 (1.02) | 262.31 (6.98) |\\n| NE | 83.71 (0.72) | 84.41 (0.79) | 82.35 (0.81) | 257.49 (6.95) |\\n| NG | 83.41 (0.77) | 83.70 (0.95) | 82.01 (0.92) | 261.24 (7.61) |\\n| IR | 83.03 (0.74) | 82.42 (0.91) | 81.01 (0.92) | 265.96 (7.57) |\"}", "{\"summary\": \"This paper investigates the use of information-theoretic measures for confidence estimation in deep 
neural networks in a post-hoc manner. It specifically compares three measures from the prior works: pointwise mutual information (PMI), pointwise $\\\\mathcal{V}$-information (PVI), and pointwise sliced mutual information (PSI), on their effectiveness as tools for confidence estimation. The study examines the theoretical properties of these measures in terms of invariance, correlation with margin, and convergence rate. Empirical evaluations are conducted on tasks such as misclassification detection, selective prediction, and calibration error analysis, using image classification tasks. These evaluations compare the three measures against baseline methods including softmax, margin, max logit, and negative entropy. The results indicate that PVI outperforms both PMI and PSI in terms of effectiveness as a confidence estimation tool.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a comprehensive theoretical and empirical analysis of three pointwise information measures (PMI, PVI, and PSI) on their effectiveness to be used as confidence estimation tools. It demonstrates that these measures can be applied in a post-hoc manner and do not require modifying the model architecture or retraining the network.\", \"Empirical evaluation covers broader scope including misclassification detection, selective prediction and calibration error analysis. These aspects are crucial for thoroughly analyzing the reliability of the confidence measures in supporting model predictions.\"], \"weaknesses\": \"* Despite the use of pointwise information measures requiring the training of additional models in a post-hoc manner, both PMI and PSI perform worse than the baseline softmax measure, as evidenced by the results in Table 3 and Table 4. 
Additionally, the PVI estimator employs temperature scaling, a post-hoc confidence calibration method, which raises concerns about whether the improved performance of PVI is due to the PVI measure itself or the temperature scaling. The paper would benefit from further evaluation of additional benchmark methods to provide clarity on this issue, specifically: (1) PVI estimator without the temperature scaling, and (2) Softmax (SM) with temperature scaling [Guo et. al. 2017]\\n\\n* Given the focus and motivation of the paper on exploring post-hoc confidence estimation tools, the empirical evaluation does not include comparisons with established post-hoc confidence calibration methods. To address this gap, it would be beneficial to compare the proposed measures with well-known methods such as Temperature Scaling (TS) [Guo et. al. 2017] and Ensemble Temperature Scaling (ETS) [Zhang et. al. 2020].\\n\\n\\n[Guo et. al. 2017] Guo, Chuan, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. \\\"On calibration of modern neural networks.\\\" In International conference on machine learning, pp. 1321-1330. PMLR, 2017.\\n\\n[Zhang et. al. 2020] Zhang, Jize, Bhavya Kailkhura, and T. Yong-Jin Han. \\\"Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning.\\\" In International conference on machine learning, pp. 11117-11128. 
PMLR, 2020.\", \"questions\": [\"Can you please address the concerns about whether the improved performance of PVI is due to the PVI measure itself or the temperature scaling in the PVI estimator?\", \"What is the reason for the contradictory findings where PSI is shown to be a better confidence estimator than PVI in experiments on correlation to Margin, while PVI outperforms PSI in experiments on misclassification detection, selective prediction, and calibration analysis?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the detailed responses. Some of my questions are addressed with specific assumptions, and others may need further experimental clarification. I hope the authors can add the assumptions to their limitations, and new results, and explain them in more detail in the next version.\"}", "{\"title\": \"Response to Reviewer 9w5v (Part 2)\", \"comment\": \"> Why PVI is favorable for complicated dataset than other PI measures?\\n\\nThere are a few different factors that help contextualize this observation. First, it is because PMI and PSI\\u2019s convergence rates can suffer significantly for complex datasets where the density values $P(x)$ are more spread (and thus lower on average) and there is more overlap between class-wise distributions (thus $P(x|y)$ is also lower). These factors significantly affect the precision of the PMI estimator, as its estimation error can significantly increase when the density $P(x)$ or $P(x|y)$ is high (Eq. 13). The same also applies in PSI\\u2019s case, as its estimation error also inversely depends on the probabilities of the projected random variables $P(\\\\theta^Tx)$ and $P(\\\\theta^Tx|y)$, and with more spread $P(x)$ and larger class-wise overlap, the projection probabilities will also be impacted negatively on average, resulting in slower convergence and greater estimation noise. 
\\n\\nNext, from the perspective of invariance properties, as shown in Remark 11 (Page 24) of the paper, we argue that invariance to the much broader class of transformations is not necessarily desirable in the context of confidence estimation, which is a point in favor of PSI and PVI. However, since PSI is not invariant to simple invertible matrix transformations but PVI is, this makes PVI the most desirable in terms of invariance. \\n\\nLastly, from the perspective of margin sensitivity, we see PMI being insensitive to hard margin, which can be a detriment to the confidence estimation problem when the training distributions are well separated because of overfitting but the test distributions are overlapping. This phenomenon is more likely to occur for complex datasets, where the network overfits by minimizing the cross-entropy loss to near zero values and creating separated feature distributions, but the overlap in the distributions turns out to be much greater when considering the test set. Although PSI turns out to be most sensitive to margin, PVI is also margin sensitive in the hard-margin case, which therefore is a boon in the context of more complex datasets. \\n\\nAll of these properties combined hint that PVI seems to be the most well-rounded of measures, that likely scales better to more high dimensional and complex datasets.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 2)\", \"comment\": \"> The proposed measures are motivated by their \\u201cDirect Probability Computation\\u201d, but given their value ranges the only way to obtain probabilities is to pass them through a squashing function such as softmax, which is precisely done in the experiments. So how does the interpretation of obtained probabilities, and their associated reliability, differ in any way from just a regular softmax on logits?\\n\\nThe squashing function (softmax) is applied to re-normalize the PI measures for interpretability as probabilities. 
Unlike regular logits, the PI measures are derived from a direct probability computation framework, which ensures they inherently reflect pointwise relationships in the data. Logits, on the other hand, can be seen as relative scores that the model assigns to each class, and they lack probabilistic grounding because they do not directly represent likelihoods or probabilities in the mathematical sense. While we will revise the motivational phrasing for clarity, the core idea remains that these measures fundamentally work with probabilities rather than arbitrary scores. The additional squashing function (like softmax) primarily adds another layer of normalization for each example, which we find to be additionally helpful. We have argued in our work that the observation that additional squashing of such pointwise measures can yield better results is actually a novel finding, as only recently raw PVI was used for confidence estimation in the natural language context (Ethayarajh et al., 2022). In some ways, we have rigorously established here empirically that the best practices for incorporating pointwise information measures for uncertainty/confidence estimation involve additional normalization functions.\", \"reference\": \"Ethayarajh, K., Choi, Y., & Swayamdipta, S. (2022). Understanding Dataset Difficulty with V-Usable Information. In Proceedings of the 39th International Conference on Machine Learning (pp. 5988\\u20136008).\\n\\n> The associated claims on \\\"robustness\\\" are not examined or backed up in any way.\\n\\nWe acknowledge that the claim about \\\"robustness\\\" is not examined in this work, such as with respect to class imbalance, and will remove it to avoid empirically unsupported motivations. However, we hypothesize that the PI measures, being more grounded in probability, are potentially more robust in capturing uncertainty, which is only limited by the accuracy and robustness of the probability/density ratio estimator used. 
Due to this, when density ratio estimators improve for neural network feature-label joint distributions in the coming years, they can be substituted with our current empirical choices for more robust results, as fundamentally our ideas are grounded in probability. We leave a more thorough investigation of robustness for future work.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 8)\", \"comment\": \"> Relatedly, the results in Table 3 are often within each other's margin of error, so there are some questions on the reliability or significance of \\u201coutperforming\\u201d.\\n\\nAfter considering the margin of error, while the performance improvement of PVI is less pronounced for AUROC_f and AUPR_{f, success}, it remains notably significant for AUPR_{f, failure} and AURC, which are the preferred metrics (Jaeger et al., 2023).\", \"reference\": \"Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning (pp. 1321\\u20131330).\\n\\n> I am personally missing a more detailed and meaningful interpretation of the results beyond stating what can be seen in the provided results tables.\\n\\nWe are incorporating a set of theoretical takeaways with a detailed discussion of how each result relates to the theory and informs us of differences in terms of expectations. These are based primarily around the more refined interpretation of convergence rates discussed in our earlier responses, and the discussion around invariance to transformations.\"}", "{\"comment\": \"We thank the reviewer for their positive remarks and valuable suggestions. 
Below, we address the concerns and questions that the reviewer has regarding our paper:\\n\\n> The novelty regarding a proposed method is weak since PMI, PSI [1], and PVI [2] have been proposed before.\\n\\nWe agree that the proposed measures have been taken from existing papers, but two of the three measures are quite recent and through our work we have rigorously shown that these measures can definitely be applied to uncertainty quantification. The theoretical analysis, being novel, definitely relies on existing papers, but the resulting perspectives on the problem of uncertainty quantification and confidence estimation are new, which we hope provide new theoretical tools in this domain to analyse and explore upcoming measures. The connections to uncertainty were rigorously validated in our experiments, and empirical results have repeatedly demonstrated the improvements from some of these measures, mainly PVI. Furthermore, the discrepancy of the confidence calibration results with the opposite observation in the margin correlation experiments is also a novel finding, which may indicate that margin sensitivity may not be a fruitful direction for theoretical treatment of upcoming confidence measures. Precise care was also taken in all our experiments to show different evaluation aspects to the confidence estimation problem, going beyond just calibration error based evaluation, which is often the norm. We are currently conducting experiments to include more benchmark methods and large-scale data and architectures.\\n\\n> The novelty in theoretical analysis is quite weak. 
Specifically, the invariance properties have been mentioned in Section 3 of [1] and the convergence rate has been analyzed in Section 3 of [1] and Section 4 of [2].\\n\\nIf the reviewer is referring specifically to the invariance property of PSI being invariant to shift, scale and rotation (part of our Proposition 1), then we indeed note that such properties have been separately discussed in [1], but in the context of the aggregate measure SMI. In our work, we also highlight that these invariances hold for the pointwise version of the measure, which is PSI in our case. Note that the invariance of the aggregate measure will not directly imply the invariance of the pointwise version, but vice-versa. Apart from this, the invariance properties of PVI have not been discussed in literature and are thus novel, and the invariance properties of PMI, while being well-known separately, have not been compiled in this manner to the best of our knowledge. We still feel that it is important to include the invariance properties of PMI along with PSI and PVI, which allows for immediate comparisons between classical measures such as PMI and more recent ones such as PVI and PSI. \\n\\nFor convergence rates, Section 3 of [1] discusses the convergence rates of the aggregate measure SMI. Our bounds are the pointwise version of these measures. We note that our results are not a direct corollary of SMI\\u2019s convergence rates, for instance to prove the convergence bounds for PSI, we have to start our proofs from applying the triangle inequality to psi itself, which is different from SMI\\u2019s case, which applies it to the aggregate SMI. Furthermore, in arriving at the result, we also make use of our PMI\\u2019s convergence bound, which uses the result in Jiang et al. (2017) [4], but is also not a direct extension of their result. This indicates that our results are not a direct consequence of SMI\\u2019s bounds. 
Similarly, the PAC bounds in section 4 of [2] apply only to empirical $\\\\mathcal{V}$-Information, and cannot be directly extended to PVI\\u2019s case. As such, we find that we need to additionally incorporate the variance of the logits (the measure M) to obtain a convergence bound for PVI which also depends on the complexity of the predictive function class $\\\\mathcal{V}$. \\n\\n> Three PI measures are less computationally efficient than other baselines (e.g., standard Softmax) by requiring additional models and computes either $pmi(x;y)$, $psi(x;y)$, or $pvi(x\\u2192y)$.\\n\\nWe acknowledge that there is a trade-off between computational efficiency and the theoretical motivations behind the three PI measures. While these measures require additional models and computations, their added theoretical grounding provides deeper insights into the dependencies and uncertainties that standard methods like softmax may not capture. The improvement in performance achieved by PVI highlights its potential and could serve as motivation for future research aimed at developing more computationally efficient methods for its estimation.\", \"title\": \"Response to Reviewer yJeK (Part 1)\"}", "{\"title\": \"Response to Follow-up Queries (Part 1)\", \"comment\": \"We thank the reviewer for continuing to engage with our work. We have since incorporated most of the changes mentioned in our responses, some of which are clarity based, and others are based on providing better discussions, based on our responses to the reviewer. Our general response lists all the points that we have addressed. 
We have also now conducted additional experiments based on the discussions with the reviewers, the results of which are provided in the general response.\\n\\nApart from this, these are our responses to some of the specific queries the reviewer had for additional clarity:\\n\\n>Your measures are clearly unbounded as stated in the appendix, and if they do not satisfy Kolmogorov's axioms they are as little valid probabilities to me as any logit scores. The argument for \\\"direct probability\\\" seems more related to point-wise data effects, which is more related to influence functions. At the end of the day you still squash them to the desired \\n range and call them probabilities thereafter, just as logits do.\\n\\n\\nWe would like to provide a clarification of the probability-based motivation, which has now been incorporated in our revised draft. Our measures are not supposed to follow Kolmogorov's axioms as they represent fundamentally a logarithm of density ratios, not actual probabilities. To estimate PMI for instance, we follow the benchmark approach for high dimensional distributions (from Tsai et al., 2020), which does not actually involve any intermediate probability estimation step, but direct estimation of the density ratios themselves. We think it is unreasonable to completely disqualify our measures from any discussion in the information-theoretic sense only because of an additional normalization step, which we also consider to be a contribution of our work as it yields the best practice in terms of results. Given that recently pointwise measures such as PVI have gained attention for confidence estimation, we definitely consider this finding relevant and useful for anyone using these pointwise measures for their models.
Thus PMI is the most structure preserving in nature. This property is helpful in the context of uncertainty estimation\\\". Perhaps you should re-think Remarks 1 and 2 in light of your arguments.\\n\\nYes, this has been addressed in the revised draft.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 6)\", \"comment\": \"> Regarding Remark 2: The provided interpretation for confidence estimation does not take into account data atypicality or OOD'ness [3]. These samples may lay far away from the decision boundary/margin but also in the tail of the data support, and thus should ideally exhibit low confidence. Also, the interpretation on confidence correlating with margin distance is only desirable for overlapping supports. If e.g. $P(X|Y=0)$ and $P(X|Y=1)$ are clearly separated (as e.g. used in Prop. 4) then we would desired maximum confidence everywhere. So, it is unclear to me how Remark 2 follows from the stated results and how directly applicable/useful these results are.\\n\\nYes, if $P(X|Y=0)$ and $P(X|Y=1)$ are clearly separated, we would indeed desire maximum confidence everywhere. But the fact that we do not know the ground truth distribution $P(X,y)$ implies that even when the estimate of $P$, denoted by $Q(X,y)$, from the training data, is perfectly separated, the separation of the true unknown $P(X,y)$ will be most likely smaller with potential overlap. This is because $Q(X,y)$ clearly has a significant chance of \\u2018overfitting\\u2019 the true distributions, as the objective of the classifier is always to separate the training feature distributions anyway. Due to this potential overestimation of the real margin, encoding additional geometric information about $Q(X,y)$, such as the hard margin involved in the perfect separation, can inform about the probability of $P(X|y=0)$ and $P(X|y=1)$ being perfectly separated as well. 
If $Q(X,y)$ has a very small hard margin, then it is possible that $P(X,y)$ ends up with overlapping class-wise feature distributions, and if it has a very large hard margin, then the opposite is likely. Of course, we are aware that there is no way to predict with certainty what will happen with the underlying $P$, but given that literature has found correlation between the hard margin between the class-wise feature distributions and generalization (Gr\\u00f8nlund et al., 2020), it felt pragmatic to prefer measures which can encode additional geometric information about the feature distributions.\", \"reference\": \"Ethayarajh, K., Choi, Y., & Swayamdipta, S. (2022). Understanding Dataset Difficulty with V-Usable Information. In Proceedings of the 39th International Conference on Machine Learning (pp. 5988\\u20136008).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer yJeK (Part 2)\", \"comment\": \"> The experimental results also lack some measurements such as sharpness and predictive entropy to assess the uncertainty quality performance. Could the authors please compare methods with the sharpness score [5] in Section 4.2?\\n\\nThank you for the suggestion. We are currently conducting experiments to include the suggested metrics, such as sharpness and predictive entropy, and will be including the results shortly.\\n\\n> What do the authors mean by a \\u201cpost-hoc manner\\u201d in L-45? Is this post-hoc recalibration technique with additional hold-out calibration data to fine-tune some learnable parameters?\\n\\nBy \\\"post-hoc manner,\\\" we mean that the method is applied after the model has been fully trained, without altering the training process or predictions of the model.\\n\\n> Remark 1 is vague to me. What kinds of uncertainty are you talking about (aleatoric or epistemic)? 
It would be great if the authors could explain this kind of uncertainty through the lens of IT (see Eq.1 in [3]).\\n\\nFor remark 1, we did not intend to consider aleatoric or epistemic uncertainty in particular when talking about invariance to data transformations, but rather the total predictive uncertainty (how uncertain a model should be about a prediction $\\\\hat{y}$, given the input $x$). Contextualizing this using the terminology in Eq. 1 of [3], we can write: $H[Y|x,D] = H[Y|\\\\mathcal{T}x,\\\\mathcal{T}D]$, where $H[Y|\\\\mathcal{T}x,\\\\mathcal{T}D]$ denotes the conditional entropy of the output labels given the transformed input datapoint $\\\\mathcal{T}x$ and also the transformed dataset $\\\\mathcal{T}D = \\\\{(\\\\mathcal{T}x_1,y_1),...,(\\\\mathcal{T}x_n,y_n)\\\\}$, which implies that the underlying distribution has been transformed as well. The ideal scenario is when the above is true for any invertible, and thus information-preserving transformation $\\\\mathcal{T}$, however, as we cannot ignore the constraints of the model involved in the decision-making process, we restrict the desirable $\\\\mathcal{T}$ to the set of invertible linear transformations on $x$. We argue that being invariant to the large class of homeomorphic transformations may be counter-productive (Remark 11, page 24 of our paper). Our argument can be summarized as follows. Let us consider a PI measure between a neural network layer $T$ and the output labels $Y$, and assume that $T\\u2019$ denotes another instance of the layer output $T$ which has the same information but arises from a different initialization of the network. If the relationship between $T$ and $T\\u2019$ is linear and invertible, then the invariance is helpful, as the network weights can adjust to preserve the network function, and thereby the degree of confidence. 
However, invariance to non-linear invertible and continuous transformations (for pointwise measures) also implies that the estimated confidence measure remains unchanged when $T\\u2019$ is related to $T$ in a non-linear manner. If the function is highly non-linear, then the estimated label for $T\\u2019$ could very likely end up having a different level of confidence compared to $T$, as the neural network\\u2019s weights are limited in the ways it can change to preserve the network function. Therefore, invariance to any homeomorphic transformation could be counterproductive. We will refine Remark 1 to incorporate these points and discuss in the context of predictive uncertainty. \\n\\n> Why when a classifier is uncertain on $X$, the uncertainty about $g(X)$ should ideally be the same? Can you formally explain this argument and give some examples about this?\\n\\nWhen looking at the invariance properties of the PI measures, we account for transformations $g(X)$ that affect not only the features, but we also consider the shift in the distribution p(X) as a result of those transformations. As an example, the observed pairs $(x_i,y_i)$ then get transformed to $(g(x_i),y_i)$. This is also why all of our invariance results are of the form $pi_P(x,y)= pi_{\\\\mathcal{T}P}(\\\\mathcal{T}x,y)$, where $\\\\mathcal{T}$ is the transformation, and $\\\\mathcal{T}x$ is the transformation applied to $x$, and $\\\\mathcal{T}P$ indicates the transformed distribution as a result of $x$ changing to $\\\\mathcal{T}x$. Thus, fundamentally we will have $P(y|X)= \\\\mathcal{T}P(y|g(X))$, because of the bijection between $X$ and $g(X)$. Thus, naturally, we want our measures to be invariant to these shifts.\"}", "{\"title\": \"Response to Reviewer HF1k (Part 7)\", \"comment\": \"> Regarding L228: based on the wording it is unclear what the \\u201csample-wise margin\\u201d is supposed to be. 
A mathematical definition seems necessary here.\\n\\nWe note that we are using different definitions of sample-wise margin for the PSI and the PVI results in Theorem 1 and Proposition 5. This is again due to the very different factors involved in the estimation of these measures. For PSI, we can outline a very geometric definition of sample-wise margin using spheres, but for PVI, due to the presence of a potentially complex network, it becomes harder to work with the same mathematical definition and provide bounds. Instead, we note that both notions of sample-wise margin aim to capture some notion of physical distance of the datapoint to the decision boundary. In PSI\\u2019s case, the definition is geometrically accurate, whereas in PVI\\u2019s case it is an approximation based on what is used commonly in literature. \\n\\n> Regarding the margin correlation experiment: The results are confusing to me because in Table 1 it seems like the results between different measures are quite different (e.g., PMI has lower and more volatile correlations than PSI), yet the UMAPs virtually all look the same. How should I understand that?\\n\\nThe results for the different measures presented in Table 1 are based on Pearson\\u2019s correlation with margin. While the UMAP visualization for all the measures show similar results, this is because all the measures are relatively effective at ranking confidence. 
To further support the findings observed in the UMAP, we will include results based on Spearman\\u2019s rank correlation, which specifically evaluates the consistency of ranking.\\n| Method | MLP, MNIST | CNN, Fashion MNIST | VGG16, STL-10 | ResNet50, CIFAR10 | \\n|-|:-:|:-:|:-:|:-:|\\nPMI | 0.903$\\\\pm$0.069 | 0.938$\\\\pm$0.020 | 0.951$\\\\pm$0.005 | 0.950$\\\\pm$0.011\\nPSI | 0.927$\\\\pm$0.018 | 0.952$\\\\pm$0.006 | 0.953$\\\\pm$0.007 | 0.873$\\\\pm$0.015\\nPVI | 0.727$\\\\pm$0.096 | 0.909$\\\\pm$0.014 | 0.902$\\\\pm$0.006 | 0.606$\\\\pm$0.029\\n\\n> My main concern is about the fact that their information measures are passed through a softmax function and calibrated with temperature scaling before benchmarking against other methods. This seems like a very biased comparison, since we observe in App D.1 that these operations significantly alter the distributions of the information measures, and improve upon the considered performance metrics. It seems unreasonable to me to scale and calibrate your measures beforehand, and then claim afterwards that they provide \\\"direct probabilities\\\" and \\\"well-calibrated\\\" confidence estimates.\\n\\nRegarding the point about our approach working with direct probabilities, we believe the reviewer has raised the point before, and we have addressed it in one of our earlier response points (please refer to Response Part 2). \\n\\n> How is this a fair comparison to any baselines that are not subject to the same transformations, e.g. ML, LM? In that context, do you also apply temperature scaling to any of the other baselines such as softmax (MSP, SM)? I feel like any performance claims should rather be reported for the raw information measures instead, since it otherwise becomes unclear where any benefits stem from.\\n\\nPlease note that the other baselines (other than ML and LM) are actually also temperature scaled. As we observe in Table 12 in Appendix D.2.3, the uncalibrated result still performs better than MSP. 
We will be including the raw results without the temperature scaling to showcase the improvement in the performance and for fair comparison with ML and LM.\\n\\n> In the experiments on confidence calibration only two simplistic baselines are considered, and they are marginally outperformed. To claim that they are \\\"outperforming all existing baselines\\\" seems like a very strong claim in that light. It would be more meaningful to consider other baselines for confidence estimation, including those that have been subjected to a similar approach of re-calibration as they do for their own measures (using temperature scaling), e.g. isotonic regression [4], regularization [5], or other uncertainty methods like models with variance predictor [6] etc.\\n\\nWe will be including more post-hoc baselines for the confidence calibration, including the suggested isotonic regression. The other two suggested methods [5,6] are not post-hoc as they require modifying the training process. Specifically, we will be including the methods proposed in Cattelan & Silva (2024). We are currently conducting the experiments and will be including the results shortly.\", \"reference\": \"Cattelan, L. F. P., & Silva, D. (2024). How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks. The 40th Conference on Uncertainty in Artificial Intelligence.\"}" ] }
0QnKnt411O
Unsupervised Zero-Shot Reinforcement Learning via Dual-Value Forward-Backward Representation
[ "Jingbo Sun", "Songjun Tu", "qichao Zhang", "Haoran Li", "Xin Liu", "Yaran Chen", "Ke Chen", "Dongbin Zhao" ]
Online unsupervised reinforcement learning (URL) can discover diverse skills via reward-free pre-training and exhibits impressive downstream task adaptation abilities through further fine-tuning. However, online URL methods face challenges in achieving zero-shot generalization, i.e., directly applying pre-trained policies to downstream tasks without additional planning or learning. In this paper, we propose a novel Dual-Value Forward-Backward representation (DVFB) framework with a contrastive entropy intrinsic reward to achieve both zero-shot generalization and fine-tuning adaptation in online URL. On the one hand, we demonstrate that poor exploration in forward-backward representations can lead to limited data diversity in online URL, impairing successor measures, and ultimately constraining generalization ability. To address this issue, the DVFB framework learns successor measures through a skill value function while promoting data diversity through an exploration value function, thus enabling zero-shot generalization. On the other hand, and somewhat surprisingly, by employing a straightforward dual-value fine-tuning scheme combined with a reward mapping technique, the pre-trained policy further enhances its performance through fine-tuning on downstream tasks, building on its zero-shot performance. Through extensive multi-task generalization experiments, DVFB demonstrates both superior zero-shot generalization (outperforming on all 12 tasks) and fine-tuning adaptation (leading on 10 out of 12 tasks) abilities, surpassing state-of-the-art URL methods.
[ "unsupervised reinforcement learning", "zero-shot generalization", "skill discovery", "successor representation" ]
Accept (Poster)
https://openreview.net/pdf?id=0QnKnt411O
https://openreview.net/forum?id=0QnKnt411O
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vHkUc4V948", "pwKrtb7nq3", "nvh6681ekh", "jJF2ZlvUPy", "iaeDPvNa2l", "fqM0Kt9h3d", "eRCs2aybt4", "dLl7J3NNSW", "cUz4Vvf7Am", "YC5Ol6NrHU", "Y3GMD4Qyfe", "WAkl9HJNGg", "UDOnSPWnvE", "TJr1tr2v07", "T5zr3M7tbq", "QzmCjZiXSS", "Pg4nli94rR", "PJaVVn2tj2", "OrXevn4gYa", "M7DYcS2gl0", "HPdRQkTc4r", "H0kTYp9zf6", "FCCyqpXbi8", "BuMFsZSNpv", "AjZ843XD3P", "7c4vUCUSdP", "2OkTcdkQyC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732574830265, 1732259052992, 1732268020800, 1732713618352, 1732268272797, 1732458120811, 1732269320514, 1732617805961, 1737523806268, 1732269492408, 1732267582461, 1730590687600, 1734267439771, 1732458195107, 1732457958791, 1732268880389, 1732577825558, 1732269188030, 1732710787319, 1732585027162, 1732585049734, 1732267659769, 1730678993194, 1732258482239, 1730717030413, 1732268428053, 1732268598166 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6957/Reviewer_pWXy" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6957/Reviewer_pWXy" ], [ "ICLR.cc/2025/Conference/Submission6957/Area_Chair_ct8H" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Reviewer_yJG4" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Reviewer_U97w" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Reviewer_yJG4" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Reviewer_U97w" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ], [ "ICLR.cc/2025/Conference/Submission6957/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thorough revisions. I appreciate that you have addressed most of the points I raised in my review. As a result, I have increased my overall rating from 5 to 6. Additionally, I have raised the presentation score from 2 to 3 due to the improvements in the clarity of the figures and pseudocode. Finally, I hope you will consider sharing your code with the research community, as it would greatly benefit others working in this area. Best regards.\"}", "{\"title\": \"Author response to review by reviewer U97w\", \"comment\": \"Thank you very much for your insightful and constructive feedback on our submission. The following are our detailed responses.\\n\\n### **Q1. The major contribution in this work is the combination of an exploration reward with FB learning, where the technical novelty is limited.**\\n\\n**A1.** As the reviewer pointed out, our method combines exploration rewards with FB learning. However, we would like to emphasize that this combination presents non-trivial technical challenges. 
Specifically, the value functions in FB [1] are inherently tied to implicit rewards for skill learning, and directly incorporating online exploration rewards could disrupt this skill learning capability. Our significant contributions include:\\n\\n**Scientific finding**: We systematically demonstrate that insufficient exploration in FB leads to low data diversity, which in turn results in inaccurate successor measures and limits zero-shot generalization. Our key scientific finding is that the critical challenge of online FB lies in preserving efficient FB representation learning while simultaneously increasing online data diversity.\\n\\n**Technical novelty**: To address this, we develop a novel DVFB framework that enhances exploration while maintaining effective skill learning through dual value functions and a novel intrinsic reward. Additionally, we introduce a dual-value fine-tuning scheme that achieves stable performance improvements, outperforming current SOTA methods.\\n\\nTo the best of our knowledge, **DVFB is the first approach to successfully enable both zero-shot generalization and fine-tuning capabilities in online URL settings.** We believe our insights into extending successor representation methods for online URL provide solid contributions to the field. We hope that our novelty and distinct contributions have been adequately justified.\\n\\n---\\n\\n### **Q2. Although the performance gain shown in Table 1 looks strong, I have concern on the baselines used in comparison for this setting. It is unclear why these are suitable baselines here for the problem of zero-shot online URL.**\\n\\n**A2.** Thanks for raising this important concern. To the best of our knowledge, no existing method achieves zero-shot generalization in online URL. 
We selected the baselines because they represent the closest existing approaches to our problem setting: **zero-shot generalization in online URL**.\\n\\n**USD methods** (e.g., BeCL, CeSD) are designed for online URL and aim to learn transferable skills. While zero-shot generalization would be ideal, they typically rely on fine-tuning in downstream tasks due to limitations in mutual information-based skill learning [2]. Our results in Table 1 demonstrate that DVFB achieves zero-shot performance comparable to their fine-tuned performance (for more results, please refer to Table 2 in the attached PDF) and outperforms them in fine-tuning.\\n\\n **Table 1: Comparison of Zero-shot and Fine-tuned Performance Across Domains**\\n\\n| Domain | Zero shot | BeCL | CeSD | DVFB | Fine-tune | BeCL* | CeSD* | DVFB |\\n|--------|-----------|------|------|------|-----------|-------|-------|------|\\n| Walker (average) | | 59 | 113 | **686** | | 708 | 668 | **852** |\\n| Quadruped (average) | | 103 | 375 | **715** | | 720 | 787 | **804** |\\n| Hopper (average) | | 1 | 3 | **101** | | 20 | 62 | **163** |\\n\\n\\n*Results marked with **\\\\*** are sourced from BeCL (ICML 23) and CeSD (ICML 24).*\\n\\n**SR methods** (e.g., LRA-SR, FB), originally designed for offline settings, represent the SOTA in offline zero-shot generalization. To ensure a fair comparison of zero-shot capabilities in online URL, we implemented their online versions based on the authors' official code.\\n\\nOur experimental setup ensures a fair comparison by evaluating all methods under identical conditions (same interaction steps and fine-tuning steps), demonstrating DVFB's superior performance in both zero-shot and fine-tuning scenarios.\"}", "{\"title\": \"Author response to review by reviewer yJG4\", \"comment\": \"Thank you for your thoughtful and constructive feedback, which will greatly help in refining our work and expanding its applicability. Here are our detailed responses.\\n\\n### **Q1. 
How broadly applicable is the method, particularly beyond robotic control tasks? Are there any preliminary results in other domains that the authors could include?**\\n\\n**A1.** Thank you for your insightful feedback regarding the broader applicability of DVFB beyond the DeepMind Control Suite. Following your suggestion, we conduct additional experiments on the Point-Mass Maze navigation and Meta-World robotic manipulation environments. Although the offline methods FB-offline[1] and MCFB-offline[2] rely on pre-collected offline datasets (in contrast to online URL, the high data sensitivity of offline zero-shot methods makes collecting suitable offline datasets a significant challenge), we provide a comparison with these offline settings for a more comprehensive evaluation of our approach. Since these methods do not report Meta-World experiments, we summarize the results of FB-offline and MCFB-offline with RND offline data in the Point-Mass Maze domain. As shown in Table 1, DVFB outperforms baseline methods across both domains. The additional results demonstrate that DVFB is not only effective in robotic control tasks but also generalizes well to other domains, such as navigation and robotic manipulation.\\n\\n\\n**Table 1: Performance comparison across different domains. For Point-Mass Maze, results show mean \\u00b1 standard deviation across three seeds. 
For Meta-World, results show success rates.**\\n| Domain | Task | FB | CIC | CeSD | FB-offline* | MCFB-offline* | DVFB |\\n|---------------|------------------|---------|---------|---------|-------------|---------------|------------|\\n| **Point-Mass**| Reach Top-left | 69 \\u00b1 6 | 18 \\u00b1 6 | 12 \\u00b1 8 | 612 | 773 | 932 \\u00b1 10|\\n| | Reach Top-right | 77 \\u00b1 95 | 5 \\u00b1 2 | 5 \\u00b1 4 | 0 | 270 | 203 \\u00b1 81 |\\n| | Reach Bottom-left | 3 \\u00b1 3 | 7 \\u00b1 4 | 18 \\u00b1 21 | 268 | 1 | 94 \\u00b1 45 |\\n| | Reach Bottom-right| 0 \\u00b1 0 | 2 \\u00b1 2 | 2 \\u00b1 2 | 0 | 0 | 4 \\u00b1 3 |\\n| | **Average** | 37.3 | 8.0 | 9.3 | 219 | 261 | **308.3** |\\n| **Meta-World**| Faucet Open | 0.18 | 0.04 | 0.00 | --- | --- | **0.60** |\\n| | Faucet Close | 0.10 | 0.18 | 0.00 | --- | --- | **0.52** |\\n\\n*Results marked with * are sourced from MCFB (NeurIPS 24).*\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for taking the time to review our revised manuscript and for your valuable feedback throughout the review process. We sincerely appreciate your recognition of the improvements made and are glad to hear that the additional clarifications and experimental results have addressed your concerns. Your constructive comments have been instrumental in enhancing the quality of our work, and we are grateful for your support and encouragement.\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q2. How important is their particular choice of reward to encourage exploration -- the contrastive entropy reward? How well would other rewards stand-in for this, or is it particularly well suited?**\\n\\n**A2.** Thank you for this valuable suggestion about comparing different intrinsic rewards. Our contrastive entropy reward is designed to encourage skill discrimination during exploration, which keeps skills learned by the FB mechanism. 
To evaluate the role of the contrastive entropy reward, we conduct two sets of comprehensive experiments.\\n\\n**Ablation study.** We perform an ablation study on the coefficient $\\\\beta$ of contrastive entropy reward $r_{intr}=r_{rnd}+\\\\beta r_{ce}$, as shown in Table 2. The results demonstrate that increasing $\\\\beta$ from 0.1 to 0.7 consistently improves performance, validating that contrastive entropy enhances generalization by promoting skill separability. However, further increasing $\\\\beta$ negatively impacts performance due to an imbalance between skill learning and exploration.\\n\\n**Table 2: Ablation study on contrastive entropy coefficient $\\\\beta$ on Walker tasks. Results show mean \\u00b1 standard deviation across three seeds.**\\n| Task | $\\\\beta$=0.1 | $\\\\beta$=0.3 | $\\\\beta$=0.5 | $\\\\beta$=0.7 | $\\\\beta$=0.9 |\\n|--------|-------------|-------------|-------------|-------------|-------------|\\n| Stand | 819\\u00b132 | 862\\u00b19 | 905 | 898\\u00b162 | 919\\u00b19 |\\n| Walk | 819\\u00b138 | 861\\u00b118 | 900 | 926\\u00b117 | 873\\u00b132 |\\n| Flip | 428\\u00b110 | 501\\u00b118 | 515 | 616\\u00b1129 | 453\\u00b130 |\\n| Run | 344\\u00b128 | 397\\u00b140 | 423 | 434\\u00b154 | 342\\u00b135 |\\n| **Average** | 603 | 655 | 686 | **719** | 647 |\\n\\n**Comparison experiment.** Following your suggestion, we compare DVFB with variants using alternative intrinsic rewards (ICM-APT, Proto, and CIC), as shown in Table 3.\\n\\n**Table 3: Comparison with alternative intrinsic rewards on Walker tasks. 
Results show mean \\u00b1 standard deviation across three seeds.**\\n\\n| Task | DVFB(ICM-APT) | DVFB(Proto) | DVFB(CIC) | DVFB |\\n|--------|---------------|-------------|-----------|-----------|\\n| Stand | 883\\u00b1106 | 844\\u00b1101 | 846\\u00b174 | **905\\u00b127**|\\n| Walk | 840\\u00b185 | 821\\u00b127 | 825\\u00b124 | **900\\u00b153**|\\n| Flip | 436\\u00b168 | 454\\u00b151 | 436\\u00b1140 | **515\\u00b167**|\\n| Run | 354\\u00b115 | 358\\u00b117 | 342\\u00b114 | **423\\u00b153**|\\n| **Average** | 628 | 619 | 612 | **686** |\\n\\nThe results reveal several key insights:\\n\\n1. **The scalability of DVFB.** All variants achieve reasonable zero-shot generalization performance, demonstrating DVFB's compatibility with different intrinsic rewards.\\n2. **The advantage of CE reward.** DVFB with contrastive entropy consistently outperforms other variants, achieving the highest average performance.\\n\\nThese experiments provide strong evidence for the effectiveness of the proposed contrastive entropy reward, while also demonstrating DVFB's flexibility in incorporating various intrinsic rewards.\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer yJG4,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q5. Have you performed any analysis to assess the sensitivity of DVFB to essential hyperparameters? This would be beneficial to evaluate the resilience of the framework across diverse contexts and circumstances.**\\n\\n**A5.** Thank you for highlighting the importance of hyperparameter sensitivity analysis. 
To address this concern, we conduct a series of ablation studies to evaluate the impact of the key hyperparameters $ \\\\alpha $, $ \\\\beta $, and $ \\\\eta $ on DVFB\\u2019s performance.\\n\\n**Ablation Study on $ \\\\alpha $.**\\nTable 5 shows the results for varying $ \\\\alpha $ in the Walker domain. The experiments indicate that while changes in $ \\\\alpha $ affect performance slightly, the overall generalization performance of DVFB remains stable.\\n\\n**Table 5 Ablation Study on $ \\\\alpha $**\\n| **Task** | $ \\\\boldsymbol{\\\\alpha = 1} $ | $ \\\\boldsymbol{\\\\alpha = 3} $ | $ \\\\boldsymbol{\\\\alpha = 5} $ | $ \\\\boldsymbol{\\\\alpha = 7} $ | $ \\\\boldsymbol{\\\\alpha = 9} $ |\\n|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|\\n| Stand | 911 $\\\\pm$ 5 | 912 $\\\\pm$ 3 | 905 $\\\\pm$ 27 | 888 $\\\\pm$ 5 | 807 $\\\\pm$ 33 |\\n| Walk | 835 $\\\\pm$ 71 | 895 $\\\\pm$ 41 | 900 $\\\\pm$ 53 | 862 $\\\\pm$ 7 | 707 $\\\\pm$ 49 |\\n| Flip | 464 $\\\\pm$ 76 | 522 $\\\\pm$ 92 | 515 $\\\\pm$ 67 | 489 $\\\\pm$ 18 | 423 $\\\\pm$ 19 |\\n| Run | 350 $\\\\pm$ 69 | 444 $\\\\pm$ 13 | 423 $\\\\pm$ 53 | 345 $\\\\pm$ 5 | 266 $\\\\pm$ 45 |\\n| **Average** | 640 | **693** | 686 | 646 | 551 |\\n\\n\\n**Ablation Study on $ \\\\beta $.**\\nSimilarly, Table 6 reports the performance for different $ \\\\beta $ values in the Walker domain. The results reveal that DVFB exhibits only minor deviations across different values. 
This demonstrates the robustness of our framework to changes in $ \\\\beta $.\\n\\n**Table 6 Ablation Study on $ \\\\beta $**\\n| **Task** | $ \\\\boldsymbol{\\\\beta = 0.1} $ | $ \\\\boldsymbol{\\\\beta = 0.3} $ | $ \\\\boldsymbol{\\\\beta = 0.5} $ | $ \\\\boldsymbol{\\\\beta = 0.7} $ | $ \\\\boldsymbol{\\\\beta = 0.9} $ |\\n|----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|\\n| Stand | 819 $\\\\pm$ 32 | 862 $\\\\pm$ 9 | 905 $\\\\pm$ 27 | 898 $\\\\pm$ 62 | 919 $\\\\pm$ 9 |\\n| Walk | 819 $\\\\pm$ 38 | 861 $\\\\pm$ 18 | 900 $\\\\pm$ 53 | 926 $\\\\pm$ 17 | 873 $\\\\pm$ 32 |\\n| Flip | 428 $\\\\pm$ 10 | 501 $\\\\pm$ 18 | 515 $\\\\pm$ 67 | 616 $\\\\pm$ 129 | 453 $\\\\pm$ 30 |\\n| Run | 344 $\\\\pm$ 28 | 397 $\\\\pm$ 40 | 423 $\\\\pm$ 53 | 434 $\\\\pm$ 54 | 342 $\\\\pm$ 35 |\\n| **Average** | 603 | 655 | 686 | **719** | 647 |\\n\\n\\n**Ablation Study on $ \\\\eta $.**\\nWe conduct an ablation study in the Quadruped domain to evaluate the sensitivity of the DVFB framework to the hyperparameter $ \\\\eta $ during the fine-tuning phase. 
As shown in Table 7, the overall average performance across all tasks remains consistent, indicating that DVFB is relatively resilient to changes in $ \\eta $.\\n\\n**Table 7 Ablation Study on $ \\eta $**\\n| **Task** | $ \\eta = 0.02 $ | $ \\eta = 0.1 $ | $ \\eta = 0.5 $ | $ \\eta = 1.0 $ |\\n|----------|-----------------|----------------|----------------|----------------|\\n| Stand | 957 $\\pm$ 4 | 964 $\\pm$ 6 | 965 $\\pm$ 7 | 954 $\\pm$ 10 |\\n| Walk | 891 $\\pm$ 32 | 908 $\\pm$ 30 | 908 $\\pm$ 21 | 886 $\\pm$ 8 |\\n| Jump | 830 $\\pm$ 18 | 838 $\\pm$ 11 | 831 $\\pm$ 20 | 835 $\\pm$ 8 |\\n| Run | 557 $\\pm$ 15 | 530 $\\pm$ 15 | 536 $\\pm$ 27 | 543 $\\pm$ 26 |\\n| **Average** | 809 | **810** | 804 | 804 |\\n\\nThe results of our sensitivity analysis demonstrate that DVFB is resilient to variations in key hyperparameters $ \\alpha $, $ \\beta $, and $ \\eta $. Furthermore, for all other neural network hyperparameters (e.g., learning rate), we adopt the default settings of URL with DDPG, ensuring consistency with prior work.\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer U97w,\\n\\nThank you for taking the time to review our manuscript and for providing valuable suggestions. We have further revised the main PDF to address your main concerns more clearly, with the changes highlighted in blue.\\n\\nBelow are the specific changes we made in response to your feedback:\\n\\n1. We explain the contributions of DVFB and the scientific findings related to extending FB to online URL.\\n2. We discuss the limitations of the offline zero-shot method in Section 2 (RELATED WORK) and highlight DVFB\\u2019s advantages over SOTA offline zero-shot methods in Appendix G.\\n3. We provide a clear categorical description of the baselines in Appendix C.2 and offer guidance in Section 6 (EXPERIMENTS).\\n4. 
We present a detailed theoretical analysis of how the DVFB framework improves online FB\\u2019s zero-shot generalization in Appendix H, and provide guidance in Section 5 (METHODOLOGY).\\n5. We have updated the captions and explanations of the figures in Section 4.\\n6. We conduct additional experiments on the Point-Mass Maze and Meta-World benchmarks in Appendix G, along with detailed ablations in Appendix K (reward mapping technique) and Appendix L (sensitivity to hyperparameters) to further demonstrate the effectiveness of DVFB.\\n\\nAs the rebuttal period is closing, the authors would greatly appreciate it if the reviewer could consider our responses to the original review. We are more than delighted to have further discussions and improve our manuscript. If our responses have addressed your concerns, we would be grateful if you could kindly re-evaluate our work.\\n\\nBest regards, Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q6. Could you elaborate on the possible limits of DVFB, including computational complexity and scalability in practical applications? Considering these criteria would yield a more equitable perspective on the approach's practical viability.**\\n\\n**A6.** Thank you for raising questions about DVFB's practical limitations. We conduct comprehensive experiments to analyze its computational complexity and scalability across different domains.\\n\\n### **Computational Analysis.** \\nWe compare the training time of DVFB with several baseline methods on identical hardware (RTX 3090). 
As shown in Table 8, DVFB requires approximately 19 hours of pre-training time, which is moderate compared to other methods:\\n\\n**Table 8 Training time comparison across different methods**\\n| **Method** | **Training Time** |\\n|-------------|---------------------|\\n| CIC | ~ 12 hours |\\n| ComSD | ~ 16 hours |\\n| BECL | ~ 24 hours |\\n| CeSD | ~ 29 hours |\\n| CL | ~ 11 hours |\\n| LAP | ~ 12 hours |\\n| LRA-P | ~ 12 hours |\\n| LRA-SR | ~ 12 hours |\\n| FB | ~ 12 hours |\\n| **DVFB** | ~ 19 hours |\\n\\nWhile DVFB's training time is higher than some baselines due to its dual-value architecture, we believe this computational overhead is justified by its superior performance.\\n\\n### **Further Domain Evaluation.**\\nWe evaluate DVFB's scalability across different domains, including point-mass maze and manipulation tasks. Table 9 shows the performance comparison. The results demonstrate DVFB's generalization capabilities across different environments, consistently outperforming baseline methods.\\n\\n**Table 9 Performance comparison across different domains**\\n| **Domain** | **Task** | **FB** | **CIC** | **CeSD** | **FB-offline*** | **MCFB-offline*** | **DVFB** |\\n|-----------------|-------------------|------------|------------|------------|-----------------|-------------------|------------|\\n| **Point-Mass** | Reach Top-left | 69 \\u00b1 6 | 18 \\u00b1 6 | 12 \\u00b1 8 | 612 | 773 | 932 \\u00b1 10 |\\n| | Reach Top-right | 77 \\u00b1 95 | 5 \\u00b1 2 | 5 \\u00b1 4 | 0 | 270 | 203 \\u00b1 81 |\\n| | Reach Bottom-left | 3 \\u00b1 3 | 7 \\u00b1 4 | 18 \\u00b1 21 | 268 | 1 | 94 \\u00b1 45 |\\n| | Reach Bottom-right| 0 \\u00b1 0 | 2 \\u00b1 2 | 2 \\u00b1 2 | 0 | 0 | 4 \\u00b1 3 |\\n| | **Average** | 37.3 | 8.0 | 9.3 | 219 | 261 | **308.3** |\\n| **Meta-World** | Faucet Open | 0.18 | 0.04 | 0.00 | --- | --- | **0.60** |\\n| | Faucet Close | 0.10 | 0.18 | 0.00 | --- | --- | **0.52** |\\n\\n*Results marked with * are sourced from MCFB (NeurIPS 24) [3].*\\n\\n### 
**Practical Limitations.**\\nIn summary, although DVFB demonstrates exceptional zero-shot performance in simulation environments, it still has room for further optimization in terms of computation time. In addition, the effectiveness of DVFB in varied real-world contexts, such as quadruped robot control and robotic manipulation, remains unknown and calls for further exploration.\\n\\n---\\n\\n### References\\n\\n[1] Yang Y, Zhou T, He Q, et al. Task Adaptation from Skills: Information Geometry, Disentanglement, and New Objectives for Unsupervised Reinforcement Learning. ICLR, 2024.\\n\\n[2] Touati A, Rapin J, Ollivier Y. Does zero-shot reinforcement learning exist? ICLR, 2023.\\n\\n[3] Jeen S, Bewley T, Cullen J. Zero-Shot Reinforcement Learning from Low Quality Data. NeurIPS, 2024.\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q3. A naive approach for this problem would be using a method of pure exploration or a method of skill discovery to collect an offline dataset with better data coverage than FB, then training FB on top of this dataset and testing its ability in zero-shot generalisation.**\\n\\n**A3.** We appreciate your comment regarding zero-shot offline baselines (i.e., a method of skill discovery to collect an offline dataset, then training FB on top of this dataset). Following your suggestion, we evaluate DVFB against SR methods, including LAP, LRA-SR, LRA-P, and FB, across the Walker, Quadruped, and Cheetah domains under offline settings. The performances of offline methods are from FB [1], where offline datasets are collected by APS, Proto, and RND exploration methods. The results, summarized in Appendix G (also see https://sites.google.com/view/zero-shot-generalization-perfo), highlight the following key differences between offline and online methods:\\n\\n**High data sensitivity for offline methods:** Offline methods exhibit significant performance variation depending on the quality of exploration data. 
On the one hand, *the same algorithm requires different exploration datasets in different domains.*\\nFor example, FB trained with Proto data in the Walker domain achieves the best performance (666), while in the Quadruped domain using Proto data yields a huge performance drop (222). On the other hand, *different algorithms require different exploration datasets.* For example, LRA-P performs best with RND data, while FB performs best with APS data. **When designing a novel algorithm, how can one determine which exploration dataset is most suitable?**\\nAn intuitive idea is to train models on different exploration datasets and compare them to find the best-performing one. Obviously, this leads to **high computational costs**.\\nIn contrast, DVFB does not depend on offline datasets, and requires only a single agent pre-training phase to achieve strong zero-shot capability, significantly reducing time and computational overhead. It is simpler and easier to deploy than offline zero-shot methods. \\n \\n**Performance limitation on pre-collected fixed dataset:** Offline methods rely on the diversity and quality of fixed pre-collected datasets, which limits their generalization performance. In contrast, DVFB balances exploration and exploitation online with an intrinsic reward based on contrastive learning, leading to enhanced skill learning and better zero-shot performance. Experimental results across **twelve tasks in Mujoco domains** demonstrate that DVFB consistently outperforms zero-shot offline methods.\\n \\nIn summary, the results demonstrate that DVFB offers superior performance and efficiency compared to both offline and online methods, establishing its significance in zero-shot online URL.\\n\\n---\\n### **Q4. 
Following the previous comment, for Table 1, it would be better to group the baselines into several categories so that it is clear from the table which property (zero-shot, online, offline, exploration or skill discovery) each method has or does not have.**\\n\\n**A4.** We agree that grouping the baselines according to their properties will enhance clarity. We categorize the methods into several groups based on their key properties as shown in the table below. We hope this categorization makes it easier to understand the key properties of each method and their relationships.\\n\\n**Table 2 Properties of Different Methods**\\n\\n| Method | Publish | Zero-shot | Online | Offline | Exploration | Skill Discovery |\\n|----------------------|------------|-----------|--------|---------|-------------|-----------------|\\n| **Successor Representation Methods** | | | | | | |\\n| CL | NeurIPS 22 | \\u2714 | \\u2718 | \\u2714 | \\u2718 | \\u2714 |\\n| Lap | ICLR 18 | \\u2714 | \\u2718 | \\u2714 | \\u2718 | \\u2714 |\\n| LRA-P | ICLR 23 | \\u2714 | \\u2718 | \\u2714 | \\u2718 | \\u2714 |\\n| LRA-SR | ICLR 23 | \\u2714 | \\u2718 | \\u2714 | \\u2718 | \\u2714 |\\n| FB | ICLR 23 | \\u2714 | \\u2718 | \\u2714 | \\u2718 | \\u2714 |\\n| **Unsupervised Skill Learning Methods** | | | | | | |\\n| CIC | NeurIPS 22 | \\u2718 | \\u2714 | \\u2718 | \\u2714 | \\u2714 |\\n| BeCL | ICML 23 | \\u2718 | \\u2714 | \\u2718 | \\u2714 | \\u2714 |\\n| ComSD | Arxiv 23 | \\u2718 | \\u2714 | \\u2718 | \\u2714 | \\u2714 |\\n| CeSD | ICML 24 | \\u2718 | \\u2714 | \\u2718 | \\u2714 | \\u2714 |\\n| **DVFB (ours)** | | \\u2714 | \\u2714 | \\u2718 | \\u2714 | \\u2714 |\"}", "{\"summary\": \"The study presents the Dual-Value Forward-Backward (DVFB) framework for zero-shot generalization in online unsupervised reinforcement learning (URL). DVFB integrates a skill value function with an exploration value function to enhance data diversity and generalization in the absence of task-specific rewards. 
It utilizes a contrastive entropy intrinsic reward to improve exploration and a dual-value fine-tuning method to optimize downstream task performance, claiming good results in continuous control tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The research presents a novel Dual-Value Forward-Backward (DVFB) paradigm that integrates skill and exploratory value functions to improve data variety and zero-shot generalization in online URL, providing an innovative method for reward-free learning.\\n\\n2. Should the suggested DVFB approach demonstrate efficacy, it may rectify a basic constraint in reinforcement learning by facilitating zero-shot generalization absent task-specific incentives, hence potentially enabling RL agents to adapt more readily to varied real-world contexts.\", \"weaknesses\": \"1. The experimental configuration and illustrations are challenging to interpret, with scant explanation offered for particular measures and comparisons. Enhanced labeling, elucidation of axes and benchmarks, and uniform layout throughout figures would facilitate comprehension of the data and augment the paper's readability. Figure 6 has mixed x-axis labels, which needs an improvement. Legends can be bigger w/o affecting the size of total figure for example Figure 7.\\n\\n2. The method depends on several essential network hyperparameters given in Table-3 yet the research fails to analyze the sensitivity of the results to these selections. An investigation of network hyperparameter sensitivity would enhance confidence in the robustness and generalizability of the findings.\\n\\n3.The implementation and/or utilization of the reward mapping technique for fine-tuning can be clarified. Integrating pseudocode would improve the accessibility and reproducibility of this component.\\n\\n4. The report omits a discussion of potential limitations, including computing cost, scalability, and difficulty in real-world implementation. 
Recognizing these factors might yield a more equitable viewpoint and inform subsequent research.\", \"questions\": \"1. Could you furnish more detailed explanations regarding the metrics employed in the studies, especially for figures where the axes and comparisons lack clarity? Supplementary labeling and contextual information would assist readers in appropriately interpreting your findings.\\n\\n2. What is the performance of DVFB in relation to other contemporary zero-shot generalization methods, and what are the reasons for the selection or exclusion of specific baselines? Incorporating a broader array of comparisons or elaborating on these selections would bolster the assertion of enhanced performance.\\n\\n3. Could you provide a detailed explanation of the practical execution of the reward mapping technique, possibly including pseudocode? Additional detail would elucidate this component's impact during fine-tuning.\\n\\n4. In what manner does the contrastive entropy reward facilitate skill differentiation, and can you present empirical data that substantiates its efficacy? An elucidation or ablation of the role of this reward would improve comprehension.\\n\\n5. Have you performed any analysis to assess the sensitivity of DVFB to essential hyperparameters? This would be beneficial to evaluate the resilience of the framework across diverse contexts and circumstances.\\n\\n6. Could you elaborate on the possible limits of DVFB, including computational complexity and scalability in practical applications? Considering these criteria would yield a more equitable perspective on the approach's practical viability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces the Dual-Value Forward-Backward (DVFB) framework for unsupervised reinforcement learning (URL), aimed at achieving both zero-shot generalization and fine-tuning adaptation. 
The method combines a skill value function with an exploration value function to improve data diversity and generalization, addressing the limitations of forward-backward representations in online settings. A contrastive entropy intrinsic reward enhances exploration, while a dual-value fine-tuning scheme optimizes performance on downstream tasks. Experimental results show that DVFB outperforms existing methods in both zero-shot generalization and fine-tuning across multiple tasks.\\n\\nThe strengths of this paper include a well-motivated and clear method, impressive results (including additional results provided during the rebuttal), and clear writing. The main weaknesses of this paper include concerns about the novelty of combining exploration reward with FB learning, as raised by U97w, and justifications for the reward and reward mapping techniques.\\n\\nThe three reviews are all positive (6, 6, and 8), suggesting acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer U97w asked for more evaluation results, and the authors provided additional results, which led the reviewer to increase their score.\\nReviewer pWXy also raised their rating due to the extra results and improved presentation. \\nBoth reviewers yJG4 and pWXy requested that the code be released, and the authors have promised to do so.\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer pWXy,\\n\\nThank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"Looking forward to further discussions!\", \"comment\": \"Dear Reviewer U97w,\\n\\nThank you for your insightful comments. 
We were wondering if our response and revision have resolved your concerns. We have attempted to address your initial questions through our replies and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.\\n\\nBest regards, Authors\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q2. What is the performance of DVFB in relation to other contemporary zero-shot generalization methods, and what are the reasons for the selection or exclusion of specific baselines? Incorporating a broader array of comparisons or elaborating on these selections would bolster the assertion of enhanced performance.**\\n\\n**A2.** We appreciate the reviewer's valuable suggestion regarding the comparison with contemporary zero-shot methods. In this paper, we address zero-shot generalization in online URL and aim to develop a method that combines both zero-shot generalization capability and fine-tuning ability. To the best of our knowledge, no existing method achieves zero-shot generalization in online URL. Therefore, we choose the baselines as they represent the closest existing approaches to our problem setting (**zero-shot generalization in online URL**).\\n\\n**USD methods** (e.g., BeCL, CeSD) are designed for online unsupervised pre-training and aim to learn transferable skills. While zero-shot generalization would be ideal, they typically rely on fine-tuning in downstream tasks due to limitations in mutual information-based skill learning[1]. 
Our results in Table 1 demonstrate that DVFB achieves zero-shot performance comparable to their fine-tuned performance, and DVFB outperforms them in fine-tuning.\\n\\n**Table 1 Comparison of Zero-shot and Fine-tune Performance**\\n| Domain | Zero shot | BeCL | CeSD | DVFB | Fine-tune | BeCL* | CeSD* | DVFB |\\n|--------|-----------|------|------|------|-----------|-------|-------|------|\\n| Walker (average) | | 59 | 113 | **686** | | 708 | 668 | **852** |\\n| Quadruped (average) | | 103 | 375 | **715** | | 720 | 787 | **804** |\\n| Hopper (average) | | 1 | 3 | **101** | | 20 | 62 | **163** |\\n*Results marked with * are sourced from BeCL (ICML 23) and CeSD (ICML 24).*\\n\\n**SR methods** (e.g., LRA-SR,FB), while originally designed for offline settings, represent SOTA in zero-shot generalization. To ensure a fair comparison of zero-shot capabilities, we implemented their online versions following the authors' official code.\\n\\nOur experimental setup ensures fair comparison by evaluating methods under identical conditions (same interaction and fine-tuning steps), demonstrating DVFB's superior performance in both zero-shot and fine-tuning scenarios.\\n\\n**Baseline Categories.** To make it easier to understand the key properties of each method and their relationships, we categorize the methods into several groups based on their key properties as shown in Appendix C.2 (also see https://sites.google.com/view/baselinestable).\\n\\n**Comparison with offline settings.** Original offline SR methods rely on pre-collected datasets, which diverges from our online URL setup. Hence, such methods were excluded from our primary comparisons. However, we conduct further experiments to provide a broader understanding of the relative advantages of DVFB. We compare DVFB against SR methods, including LAP, LRA-SR, LRA-P, and FB, across the three domains under offline settings. 
The performances of offline methods are from FB [2], where offline datasets are collected by APS, Proto, and RND exploration methods. The results, summarized in Appendix G (also see https://sites.google.com/view/zero-shot-generalization-perfo), highlight the key differences between offline and online methods:\\n\\n**High data sensitivity for offline methods:** Offline methods exhibit significant performance variation depending on the quality of exploration data. On the one hand, the same algorithm requires different exploration datasets in different domains. For example, FB trained with Proto data in the Walker domain achieves best performance (666), while in the Quadruped domain using Proto data yields a huge performance drop (222). On the other hand, different algorithms require different exploration datasets. For example, LRA-P performs best with RND data, while FB performs best with APS data. **When designing a novel algorithm, how can one determine which exploration dataset is most suitable?** The intuitive idea is to train the models on different exploration datasets and compare them to find the best-performing model. Obviously, this leads to high computational costs. In contrast, DVFB does not depend on offline datasets, and requires only a single agent pre-training phase to achieve zero-shot capability, significantly reducing time and computational overhead.\\n\\n**Performance limitation on pre-collected fixed datasets:** Offline methods rely on the diversity and quality of fixed pre-collected datasets, which limits their generalization performance. In contrast, DVFB balances the exploration and exploitation online with an intrinsic reward based on contrastive learning, leading to enhanced skill learning and better zero-shot performance.
\\n\\nIn summary, the results demonstrate that DVFB offers superior performance and efficiency compared to both offline and online methods, establishing its significance in zero-shot online URL.\"}", "{\"comment\": \"Many thanks for the detailed responses to my and the other reviewers comments and concerns. The extra experiments and ablations are interesting and fill in the gaps in presentation. It's also very promising to see that DVFB is effective in tasks beyond robotic control. Given the extra results and proposed changes, I am confident with my original scoring and still believe that this a good paper that would be a valuable addition to ICLR.\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q4. In what manner does the contrastive entropy reward facilitate skill differentiation, and can you present empirical data that substantiates its efficacy? An elucidation or ablation of the role of this reward would improve comprehension.**\\n\\n**A4.** We agree that a deeper analysis of contrastive entropy reward would strengthen our work.\\n\\n**Contrastive Entropy Reward.**\", \"our_contrastive_entropy_reward_is_specifically_designed_to_encourage_exploration_while_keeping_skill_separability_through_two_mechanisms\": \"- **Skill Discriminator.** We employ contrastive learning to train a skill discriminator that learns representation for trajectories and skills. The dot product between trajectory representation and skill representation serves as a similarity measure. The learned discriminator is used to distinguish skill-trajectory pairs.\\n- **Contrastive Entropy.** We maximize a particle entropy computed from the dissimilarity between a trajectory and its N most related skills. This encourages the agent to keep skill separability while exploring new states.\\n\\n**Ablation Study.**\\nWe perform an ablation study on the coefficient \\\\(\\\\beta\\\\) of contrastive entropy reward \\\\(r_{intr} = r_{rnd} + \\\\beta r_{ce}\\\\), as shown in Table 3. 
The results demonstrate that increasing \\\\(\\\\beta\\\\) from 0.1 to 0.7 consistently improves performance, validating that contrastive entropy enhances generalization by promoting skill separability. However, further increasing \\\\(\\\\beta\\\\) negatively impacts performance due to an imbalance between skill learning and exploration.\\n\\n**Table 3 Ablation study on contrastive entropy coefficient $\\\\beta$ on Walker tasks. Results show mean \\u00b1 standard deviation across three seeds.**\\n| Task | \\\\(\\\\beta = 0.1\\\\) | \\\\(\\\\beta = 0.3\\\\) | \\\\(\\\\beta = 0.5\\\\) | \\\\(\\\\beta = 0.7\\\\) | \\\\(\\\\beta = 0.9\\\\) |\\n|------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Stand | 819\\u00b132 | 862\\u00b19 | 905 | 898\\u00b162 | 919\\u00b19 |\\n| Walk | 819\\u00b138 | 861\\u00b118 | 900 | 926\\u00b117 | 873\\u00b132 |\\n| Flip | 428\\u00b110 | 501\\u00b118 | 515 | 616\\u00b1129 | 453\\u00b130 |\\n| Run | 344\\u00b128 | 397\\u00b140 | 423 | 434\\u00b154 | 342\\u00b135 |\\n| **Average** | 603 | 655 | 686 | **719** | 647 |\\n\\n **Comparison Experiment.**\\nWe further compare DVFB with its variants using alternative intrinsic rewards (ICM-APT, Proto, and CIC), as presented in Table 4.\\n\\n**Table 4 Comparison with alternative intrinsic rewards on Walker tasks. Results show mean \\u00b1 standard deviation across three seeds.**\\n| Task | DVFB (ICM-APT) | DVFB (Proto) | DVFB (CIC) | DVFB |\\n|-------|-----------------|-----------------|-----------------|-----------------|\\n| Stand | 883\\u00b1106 | 844\\u00b1101 | 846\\u00b174 | **905\\u00b127** |\\n| Walk | 840\\u00b185 | 821\\u00b127 | 825\\u00b124 | **900\\u00b153** |\\n| Flip | 436\\u00b168 | 454\\u00b151 | 436\\u00b1140 | **515\\u00b167** |\\n| Run | 354\\u00b115 | 358\\u00b117 | 342\\u00b114 | **423\\u00b153** |\\n| **Average** | 628 | 619 | 612 | **686** |\", \"the_results_reveal_several_key_insights\": \"1. 
**The scalability of DVFB.** All variants achieve reasonable zero-shot generalization performance, demonstrating DVFB's compatibility with different intrinsic rewards.\\n2. **The advantage of CE reward.** DVFB with contrastive entropy consistently outperforms other variants, achieving the highest average performance.\\n\\nThese experiments provide evidence for the effectiveness of our contrastive entropy design, while also demonstrating DVFB's flexibility in incorporating different intrinsic rewards.\"}", "{\"comment\": \"I would like to thank the authors for providing further clarifications and additional experimental results. These results have addressed most of my concerns. The paper has also shown improvements after the revision. Consequently, I have increased my overall score by 1.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your thoughtful feedback and for taking the time to assess our revisions. We are glad to see that we could address your concerns. We sincerely appreciate your recognition of the paper\\u2019s contribution and your positive evaluation of our work!\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for taking the time to help us improve our work. We are pleased to see that we could address your concerns. We sincerely appreciate you raising the score on our work. We are currently finalizing the code and its documentation to ensure ease of use and plan to release it publicly upon the paper\\u2019s acceptance. Once again, thank you for your constructive feedback and support!\"}", "{\"title\": \"Continued\", \"comment\": \"### **Q5. It is unclear what guarantee of zero-shot generalization of the proposed method can have.**\\n\\n**A5.** Thank you for your insightful feedback. We agree that providing a theoretical foundation for the proposed method is crucial to support its objectives and claims. 
Below, we address your concerns by offering a theoretical guarantee for zero-shot generalization and a detailed analysis of how the DVFB framework improves FB\\u2019s zero-shot generalization in online URL.\\nTheoretical guarantee for zero-shot generalization and theoretical analysis is shown in Appendix H (also see https://sites.google.com/view/theoretical-gurantee).\\n\\nAccording to our experimental results, the DVFB framework consistently outperforms FB across diverse unseen tasks. These findings confirm the zero-shot generalization capability of the DVFB framework in online URL.\\n\\n---\\n\\n### **Q6. For Fig. 2 and Fig. 3, Please add more descriptions to the caption so that the reader can understand the main discovery and meaning of the figure.**\\n\\n**A6.** Thank you for your suggestion. We agree that the captions for Figures 2 and 3 could be clearer. We revise them as follows:\\n\\n**Figure 2**: The $x$ axis \\\"position\\\" means the walking distances for different trajectories, and short-distance trajectories reflect less diverse exploration outcomes. The $y$ axis \\\"skill index\\\" means the index of different skills. Lines of the same color represent trajectories corresponding to the same skill. The revised caption is:\\n\\n> \\\"The trajectories of the RND, FB-online, and FB-offline agents in the *Walker walk* task. The FB-online agent learns short-range trajectories, in contrast to the RND agent's diverse exploration and the FB-offline agent's ability to master long-range locomotion skills.\\\"\\n\\n**Figure 3**: In (a), the x-axis represents time steps during pre-training, and the y-axis shows the episode rewards of agents. In (b), the x-axis represents trajectories categorized by episode reward ranges. The y-axis in the top figure indicates the skill values, while the y-axis in the bottom figure depicts the Spearman correlation between skill values and episode rewards. 
The revised caption is:\\n\\n> \\\"Generalization curve and value function properties. (a) presents the agents' performance during pre-training, while (b) illustrates the normalized skill value and the Spearman correlation between skill value and return across different return ranges.\\\"\\n\\nWe have updated the captions and explanations of figures in the revised PDF.\\n\\n\\n---\\n\\n### References\\n\\n1. Touati A, Rapin J, Ollivier Y. Does zero-shot reinforcement learning exist?. ICLR, 2023.\\n\\n2. Yang Y, Zhou T, He Q, et al. Task Adaptation from Skills: Information Geometry, Disentanglement, and New Objectives for Unsupervised Reinforcement Learning. ICLR, 2024.\"}", "{\"summary\": \"This work introduces the Dual-Value Forward-Backward (DVFB) representation framework for unsupervised reinforcement learning (URL). It tackles the challenge of enabling agents to generalise to new tasks without further training (zero-shot generalisation) in addition to fine-tuning adaptation in online settings.\\n\\nIt builds on successor representation (SR)-based approaches which aim to learn a representation of expected future states and have been shown to aid zero-shot generalisation in RL. In particular, the work extends forward-backward (FB) representations by learning both forward and backward dynamics. The authors explore failures in FB-based approaches in online URL settings and find that it is due to inadequate exploration. They address this by introducing an intrinsic reward based on contrastive learning to encourage exploration, combining this \\u201cexploration value\\u201d function to the usual \\u201cskill value\\u201d function to arrive at their DVFB. The authors also introduce a fine-tuning scheme using a reward mapping technique to add further online adaptation capabilities to their method. 
\\n\\nThe authors validate DVFB across twelve robot control tasks in the DeepMind Control Suite and demonstrate the approach gives both superior zero-shot and fine-tuning performance.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Zero-shot generalisation in online settings is an important problem in RL, where progress is going to be essential for successfully deploying RL in real-world applications. DVFB advances the field's understanding of how to create agents that can solve and adapt to new tasks immediately, without requiring extensive retraining.\", \"The authors build on foundational concepts in the field such as SR and FB representations. The paper does a good job identifying the limitations of FB in online settings, pinpointing insufficient exploration as a core issue, and using their insights to justify the extensions of FB into DVFB. The introduction of a novel exploration value function alongside the skill value function is an original approach that enhances exploration and, as shown in their motivation and results, improves generalisation. Furthermore, the addition of a reward mapping is a valuable addition that enables them to demonstrate both zero-shot generalisation and fine-tuning in an online setting.\", \"Impressive results: the paper presents rigorous experiments across 12 diverse control tasks in a widely used benchmark for tasks requiring fine-grained motor control. In terms of zero-shot performance, their method outperforms baseline methods across the 12 tasks, particularly in tasks where others struggle with exploration (further supporting their motivation). It also outperforms on fine-tuning performance, showing faster adaptation and greater stability compared to the state-of-the art in URL.\", \"The paper is well written, guiding the reader through the problem being addressed, relevant related work, the motivations for their extensions, implementation, results and conclusions. 
The methodology section is nicely laid out, with clear explanations and schematics detailing components of the model. Their experimental results are clearly presented and easy to understand.\"], \"weaknesses\": [\"Potential for broader applicability: the paper focuses on tasks in the DeepMind Control Suite. This demonstrates DVFB\\u2019s capability in robotic control tasks, but leaves one wondering about the framework's versatility which otherwise seems very general. Could the authors discuss the potential for adapting DVFB to other domains, such as navigation? If possible, preliminary results or discussion on expected performance in different contexts would broaden the scope of the work.\", \"Other intrinsic rewards: the paper attributes improvements to enhanced exploration, but it doesn\\u2019t delve into specific advantages that contrastive entropy is providing over other intrinsic rewards. Going beyond the DVFV w/o CE ablation experiment and trying out other intrinsic rewards (beyond just RND rewards) could add further insight into their particular choice of contrastive entropy.\", \"Sparse presentation of reward mapping technique: there\\u2019s limited detail and justification for the reward mapping technique. It\\u2019s unclear whether this particular mapping method is optimal or if other strategies might perform equally well or even better in different tasks. Further exploration would clarify its effectiveness and limitations. Could the authors discuss more justification for this approach, as well as analysing some alternatives?\", \"Reproducibility: lack of code to reproduce the results: providing code would significantly enhance the paper\\u2019s accessibility. 
While the inclusion of pseudocode and hyperparameters is appreciated and provides important details for the method, the absence of actual code makes it challenging for others to fully replicate the experiments or apply the DVFB framework to other settings.\"], \"questions\": [\"Reflecting the weaknesses discussed above, my key questions are:\", \"How broadly applicable is the method, particularly beyond robotic control tasks? Are there any preliminary results in other domains that the authors could include?\", \"How important is their particular choice of reward to encourage exploration -- the contrastive entropy reward? How well would other rewards stand-in for this, or is it particularly well suited?\", \"Similar questions for the reward mapping technique. Could we see more justification for their approach and other alternatives explored?\", \"Can the authors provide code so that others can directly reproduce the results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Meta response from the authors\", \"comment\": \"We sincerely thank all reviewers for their constructive feedback (we refer to U97w as R1, yJG4 as R2, and pWXy as R3). We are grateful that **most reviewers positively acknowledge our overall contributions**, such as *\\\"identifying the limitations of FB in online settings\\\"* and *\\\"a novel exploration value... is an original approach\\\"* (R2), as well as *\\\"presents a novel DVFB paradigm... providing an innovative method for...\\\"* (R3). Furthermore, the **superior performance** of our approach has been highlighted: *\\\"achieved improved performance\\\"* (R1) and *\\\"impressive results, faster adaptation and greater stability\\\"* (R2).\\n\\nTo address the concerns raised, we make a series of major revisions, summarized below:\\n\\n1. 
**Appendix C.2**: We categorize the baseline methods based on their properties, facilitating a clearer understanding of their key characteristics and interrelationships.\\n\\n2. **Appendix G**: We conduct new experiments in the DMC benchmark to compare with popular offline zero-shot methods and analyze the advantages of zero-shot online URL over zero-shot offline URL.\\n\\n3. **Appendix H**: We present a theoretical analysis to substantiate the zero-shot generalization capability of our proposed method.\\n\\n4. **Appendix I**: We extend the evaluation of DVFB to additional navigation and robotic manipulation domains, showcasing its potential for broader applicability.\\n\\n5. **Appendix J**: We conduct a comparative study of the contrastive entropy reward against other reward functions, including ICM-APT, Proto, and CIC, to analyze its significance and impact.\\n\\n6. **Appendix K**: We provide a detailed implementation of the reward mapping technique and analyze its significance through comparisons with alternative approaches.\\n\\n7. **Appendix L**: We perform additional ablation studies to evaluate the sensitivity of DVFB to hyperparameters.\\n\\n8. **Figures and Explanations**: We revise the figures in the paper and add detailed explanations for both the figures and the experiments to improve the readers' understanding.\\n\\nIn summary, we have **significantly extended the empirical evaluations** in the revised manuscript. Importantly, the results of these comprehensive experiments are **generally consistent with the observations and conclusions from our original submission**. Please refer to the revised manuscript (highlighted changes are in blue) for details. Additionally, the responses to specific reviewer comments offer further clarification on these updates.\\n\\nPlease let us know if we have sufficiently addressed your concerns. We are happy to engage in further discussions and make additional improvements. 
If our response meets your expectations, we would greatly appreciate a re-evaluation of our work.\"}", "{\"summary\": \"This work presented a pre-training framework for zero-shot reinforcement learning by leveraging forward-backward (FB) representation. Unlike some previous study on zero-shot RL, this work analysed the performance gap of FB in the online learning setting compared with a specific exploration strategy. The authors then proposed a new exploration reward based on contrastive learning and incorporated this into FB training by end-to-end online learning. The proposed method is evaluated in zero-shot online URL and fine-tuning settings. Experimental results suggest that it achieved improved performance than some baseline methods given limited interactions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This work is well motivated and the promotion of exploratory behaviour during the pre-training phase to increase the data coverage is reasonable.\\n\\nThe paper is well written and easy to follow.\", \"weaknesses\": \"The major contribution in this work is the combination of an exploration reward with FB learning, where the technical novelty is limited.\\n\\nAlthough the performance gain shown in Table 1 looks strong, I have concern on the baselines used in comparison for this setting. It is unclear why these are suitable baselines here for the problem of zero-shot online URL. Many baselines here are either not designed for online learning with self-generated trajectory (for example, LRA-SR used in (Touati et al., 2023)) or not zero-shot testing (if I\\u2019m not mistaken some baseline finetunes longer steps, for example, CeSD with 100k interactions rather than 1e^4 used in this work). So it does not look so exciting when explicit exploration techniques are used in combination with a zero-shot technique. A naive approach for this problem would be using a method of pure exploration (e.g.
r_{ce} proposed in this work as the proposed intrinsic reward itself has the capability to collect an offline dataset.) or a method of skill discovery to collect an offline dataset with better data coverage than FB, then training FB on top of this dataset and testing its ability in zero-shot generalisation. This could probably better demonstrate the advantage of combining exploration reward with FB in online URL.\\n\\nFollowing the previous comment, for Table 1, it would be better to group the baselines into several categories so that it is clear from the table which property (zero-shot, online, offline, exploration or skill discovery) each method has or does not have.\\n\\nThere is no theoretical analysis to support the proposed objective function and reward function either in the FB pre-training stage or fine-tuning stage. It is unclear what guarantee of zero-shot generalisation of the proposed method can have.\", \"questions_and_suggestions\": \"For Fig. 2 and Fig. 3, Please add more descriptions to the caption so that the reader can understand the main discovery and meaning of the figure.\", \"questions\": \"Please see the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Q3. Similar questions for the reward mapping technique. Could we see more justification for their approach and other alternatives explored?**\\n\\nWe appreciate the reviewer\\u2019s thoughtful feedback on the reward mapping technique. In response, we provide a detailed justification and explore potential alternatives during our experiments.\\n\\n**Potential fine-tuning techniques.** For fine-tuning, our objective is to ensure stable policy improvements over zero-shot performance. The reward mapping is designed to achieve stable improvement by balancing the importance of prior skill value $Q_M$ and downstream task value $Q_F$. 
A straightforward alternative is directly using the downstream task rewards $r_t$ for the task value, and choosing a suitable coefficient $\\\\eta$ in Eq.11 to balance the value of $Q_M$ and $Q_F$. We call it DVFB w/o MAP, using dual-value fine-tuning with the downstream task value based on downstream task rewards $r_t$ rather than $r_f$. We also explore an adaptive method to dynamically adjust the $\\\\eta$ parameter by setting $\\\\eta = \\\\frac{Q}{MQ}$, which we refer to as DVFB w/o MAP adaptive.\\n\\n**Experimental results.** We conduct comparative experiments in the quadruped domain, with the results presented in https://sites.google.com/view/figureeta and Table 4. The findings show that DVFB w/o MAP achieves stable but limited improvement with higher coefficients, while exhibiting unstable performance with lower coefficients, ultimately constraining overall performance. It is also challenging for DVFB w/o MAP adaptive to achieve superior improvement due to the nonlinear relationship between skill value and downstream value. In contrast, DVFB with the reward mapping technique consistently delivers stable and superior improvements across all tasks. 
These results validate the effectiveness of our chosen approach.\\n\\n**Table 4: Performance comparison of different fine-tuning approaches on the quadruped domain.\\nResults show mean \\u00b1 std over three runs.**\\n\\n| Task | $\\\\eta = 0.02$ | $\\\\eta = 0.1$ | $\\\\eta = 0.5$ | adaptive $\\\\eta$ | DVFB |\\n|--------|----------------|--------------|--------------|-----------------|----------|\\n| Stand | 954$\\\\pm$5 | 951$\\\\pm$10 | 961$\\\\pm$8 | 960$\\\\pm$5 | 965$\\\\pm$7|\\n| Walk | 753$\\\\pm$32 | 765$\\\\pm$18 | 752$\\\\pm$31 | 820$\\\\pm$77 | 908$\\\\pm$21|\\n| Jump | 819$\\\\pm$28 | 811$\\\\pm$16 | 784$\\\\pm$59 | 830$\\\\pm$12 | 831$\\\\pm$20|\\n| Run | 496$\\\\pm$23 | 491$\\\\pm$4 | 490$\\\\pm$14 | 496$\\\\pm$48 | 536$\\\\pm$27|\\n| **Average** | 756 | 755 | 747 | 777 | **804** |\\n\\n**Justification for the reward mapping technique.** \\nThe reward mapping mechanism in DVFB helps balance prior skill value and downstream task value, enabling stable and effective fine-tuning. Our experiments demonstrate that this approach significantly outperforms alternatives. The consistent improvement across tasks provides strong evidence for the effectiveness of this technique. We agree that exploring other effective alternative options is a meaningful topic.\\n\\n---\\n\\n### **Q4. Can the authors provide code so that others can directly reproduce the results?**\\n\\n**A4.** Thank you for your suggestion regarding code availability to ensure reproducibility. We are committed to making all relevant code and implementation details publicly available upon acceptance of the paper to facilitate reproducibility and further research. We appreciate your understanding and support for this approach. \\n\\n---\\n\\n### References\\n\\n1. Touati A, Rapin J, Ollivier Y. Does zero-shot reinforcement learning exist?. *ICLR*, 2023. \\n2. Jeen S, Bewley T, Cullen J. Zero-Shot Reinforcement Learning from Low Quality Data. 
*NeurIPS*, 2024.\", \"title\": \"Continued\"}", "{\"title\": \"Author response to review by reviewer pWXy\", \"comment\": \"Thank you for your thoughtful and detailed feedback, which will greatly help improve our work. Below are our detailed responses.\\n\\n### **Q1. Could you furnish more detailed explanations regarding the metrics employed in the studies, especially for figures where the axes and comparisons lack clarity? Supplementary labeling and contextual information would assist readers in appropriately interpreting your findings.**\\n\\n**A1.** Thank you for your detailed feedback. We agree that these suggestions will enhance the clarity and readability of the paper. In response, we have added more detailed explanations for the figures and metrics used in the studies. Specifically, we have standardized the x-axis labels in Figure 6 and enlarged the legend fonts in Figure 7, while ensuring that the overall figure dimensions remain consistent. These adjustments aim to make the figures and their comparisons easier to interpret for readers.\\n\\n---\\n\\n### **Q3. Could you provide a detailed explanation of the practical execution of the reward mapping technique, possibly including pseudocode? Additional detail would elucidate this component's impact during fine-tuning.**\\n\\n**A3.** The reward mapping technique in our framework is designed to balance the influence of the skill value function with the downstream task value function, ensuring stable and effective fine-tuning. Below, we provide a detailed explanation of this technique and demonstrate its impact through additional experiments.\\n\\n**Reward Mapping Implementation.**\", \"the_reward_mapping_process_involves_three_key_steps\": \"1. Compute the implicit reward using the backward network (backward_net) and a latent variable (z).\\n2. Normalize both implicit and extrinsic rewards using running mean and standard deviation trackers.\\n3. 
Rescale the extrinsic reward to align with the scale of the implicit reward.\\n\\nThe detailed implementation and pseudocode is shown in Appendix K (also see https://sites.google.com/view/rewardmapping).\\n\\n**Potential Fine-Tuning Techniques.**\\nFor fine-tuning, our objective is to ensure stable policy improvements over zero-shot performance. The reward mapping is designed to achieve stable improvement by balancing the importance of prior skill value $Q_M$ and downstream task value $Q_F$. A straightforward alternative is directly using the downstream task rewards $r_t$ for the task value, and choosing a suitable coefficient $\\\\eta$ in Eq.11 to balance the value of $Q_M$ and $Q_F$. We call this approach **DVFB w/o MAP**, using a dual-value fine-tuning scheme with the downstream task value based on downstream task rewards $r_t$ rather than $r_f$. We also explore an adaptive method to dynamically adjust the $\\\\eta$ parameter by setting $\\\\eta = \\\\frac{Q}{MQ}$, which we refer to as **DVFB w/o MAP adaptive**.\\n\\n**Experimental Results.**\\nWe conducted comparative experiments in the quadruped domain, with the results presented in Appendix K (also see https://sites.google.com/view/figureeta) and Table 2. The findings show that DVFB w/o MAP achieves stable but limited improvement with higher coefficients, while exhibiting unstable performance with lower coefficients, ultimately constraining overall performance. It is also challenging for DVFB w/o MAP adaptive to achieve superior improvement due to the nonlinear relationship between skill value and downstream value. In contrast, DVFB with the reward mapping technique consistently delivers stable and superior improvements across all tasks. 
These results validate the effectiveness of our chosen approach.\\n\\n**Table 2 Performance Comparison of Different Fine-Tuning Approaches**\\n| Task | DVFB w/o MAP ($\\\\eta = 0.02$) | DVFB w/o MAP ($\\\\eta = 0.1$) | DVFB w/o MAP ($\\\\eta = 0.5$) | DVFB w/o MAP (adaptive $\\\\eta$) | DVFB |\\n|-------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-------|\\n| Stand | 954 $\\\\pm$ 5 | 951 $\\\\pm$ 10 | 961 $\\\\pm$ 8 | 960 $\\\\pm$ 5 | 965 $\\\\pm$ 7 |\\n| Walk | 753 $\\\\pm$ 32 | 765 $\\\\pm$ 18 | 752 $\\\\pm$ 31 | 820 $\\\\pm$ 77 | 908 $\\\\pm$ 21 |\\n| Jump | 819 $\\\\pm$ 28 | 811 $\\\\pm$ 16 | 784 $\\\\pm$ 59 | 830 $\\\\pm$ 12 | 831 $\\\\pm$ 20 |\\n| Run | 496 $\\\\pm$ 23 | 491 $\\\\pm$ 4 | 490 $\\\\pm$ 14 | 496 $\\\\pm$ 48 | 536 $\\\\pm$ 27 |\\n| **Average** | 756 | 755 | 747 | 777 | **804** |\\n\\n\\n**Justification for the Reward Mapping Technique.** \\nThe reward mapping mechanism in DVFB helps balance the prior skill value and downstream task value, enabling stable and effective fine-tuning. Our experiments demonstrate that this approach significantly outperforms alternatives.\"}" ] }
0QkVAxJ5iZ
FacLens: Transferable Probe for Foreseeing Non-Factuality in Large Language Models
[ "Yanling Wang", "Haoyang Li", "Hao Zou", "Jing Zhang", "Xinlei He", "Qi Li", "Ke Xu" ]
Despite advancements in large language models (LLMs), non-factual responses remain prevalent. Unlike extensive studies on post-hoc detection of such responses, this work studies non-factuality prediction (NFP), aiming to predict whether an LLM will generate a non-factual response to a question before the generation process. Previous efforts on NFP have demonstrated LLMs' awareness of their internal knowledge, but they still face challenges in efficiency and transferability. In this work, we propose a lightweight NFP model named Factuality Lens (FacLens), which effectively probes hidden representations of questions for the NFP task. Besides, we discover that hidden question representations sourced from different LLMs exhibit similar NFP patterns, which enables the transferability of FacLens across LLMs to reduce development costs. Extensive experiments highlight FacLens’s superiority in both effectiveness and efficiency.
[ "Large language models", "hidden question representation", "non-factuality predictor", "transferability" ]
Reject
https://openreview.net/pdf?id=0QkVAxJ5iZ
https://openreview.net/forum?id=0QkVAxJ5iZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yp3YyRgAPG", "xyXvpq60Eq", "vU28vysPRh", "uhLKtorWfe", "sZMF4AnO8x", "lOrpKBZTXK", "cbmQAcvqsr", "cT4mtNXFQs", "cBh2saryoV", "UJOQWRIkxQ", "TOapPqMeTI", "RvUrKWpTAQ", "RFH8pXDrVz", "POUlBy2THX", "M9tp9BxIwm", "KXSSiPiY6x", "DQTEHjrvFa", "Cmn8q1I68Z", "78weEastGI", "2oXfKdr24v", "2hFtM5LR0L" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734490517592, 1731996871751, 1730713554414, 1732527918645, 1731926508950, 1732503989935, 1732596492621, 1733305127323, 1731919320558, 1732503916734, 1730699721491, 1731918693028, 1732599923813, 1732528485606, 1732503853772, 1730873235309, 1730648962825, 1737523996481, 1732503717592, 1732528030826, 1731921502123 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9638/Area_Chair_sUdK" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Reviewer_yYWm" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Reviewer_yYWm" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Reviewer_GNy3" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Reviewer_3fBW" ], [ 
"ICLR.cc/2025/Conference/Submission9638/Reviewer_aR8Q" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ], [ "ICLR.cc/2025/Conference/Submission9638/Authors" ] ], "structured_content_str": [ "{\"metareview\": [\"All reviewers agree that the paper provides a novel methodology to \\\"predict\\\" whether an LLM produces a non-factual response based on the question's hidden representations. However, there are a few shortcomings that need to be addressed to make this work more complete. In particular,\", \"As provided by one of the reviewers, the scope of the work seems limited to short QAs. However, it is known that hallucination can be an issue for other tasks like summarization or when the answer is more involved than a single answer (which is a more common use case of modern LLMs compared to classical Q&A systems). For example, what if the question asks to list all the grand slams won by a particular tennis player? How is this work applicable there when the answer is almost correct except for a single item? If this work can be adapted to such use cases it becomes more practical.\", \"Related to the first item, how is this work applicable in chat cases? How can the previous context change hidden representations and maybe cause this method to have way too many false negatives?\", \"As mentioned by another reviewer, it is not clear how this transferability works. It would be good to check that after different fine-tuning/RLHF on the same pretrained model as well to see what affects that transferability. 
For example, if you take a model and train it on the TriviaQA test set, I assume the model should always produce the correct answer; it would be good to see what such a case tells about the transferability of that method compared to the same pretrained model fine-tuned without the TriviaQA test set.\"], \"additional_comments_on_reviewer_discussion\": \"None\"}", "{\"title\": \"Response to Reviewer 3fBW (2/2)\", \"comment\": \"We sincerely thank you again for your detailed review. We hope our responses will address your concerns. We are pleased to engage in further discussion and address any additional questions you may have.\\n\\nSincerely,\\n\\nAll authors\"}", "{\"summary\": \"Unlike studies on post-hoc detection of non-factual responses, this paper studied non-factuality prediction (NFP), which aims to predict whether an LLM will generate a non-factual response to a question before the generation process. While previous efforts on NFP have demonstrated LLMs' awareness of their internal knowledge, they still faced challenges in efficiency and transferability. Thus, this paper proposed a lightweight NFP model, named Factuality Lens (FacLens), which effectively probes hidden representations of questions for the NFP task. Further, this paper discovered that the hidden question representations from different LLMs exhibit similar NFP patterns, which enables the transferability of FacLens across LLMs to reduce development costs. The experiments highlighted FacLens\\u2019s superiority in both effectiveness and efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper tackled an interesting task of non-factuality prediction (NFP), tried to solve the problems of previous work in efficiency and transferability, and proposed a lightweight NFP model, named Factuality Lens (FacLens). The experiments highlighted FacLens\\u2019s superiority in both effectiveness and efficiency.\", \"weaknesses\": \"1. 
While the observations that the hidden question representations from different LLMs exhibited similar NFP patterns in Sec. 4.2 are interesting, we are eager to know why they happened and why unsupervised domain adaptation is possible in cross-LLM FacLens. It is better to investigate and mention some possible reasons for them, if possible, while the inspiration from the research on human cognition was mentioned in the paper.\\n\\nFurther, I wonder whether the observations in Sec. 4.2 are really applicable to other LLMs. Can the authors mention the generalizability of the observations? \\n\\nMore seriously, in Sec. 5, depending on the datasets, the characteristics and the performance of LLMs seem different in Fig. 6. For example, on PQ and EQ, Qwen2 is rather different from the others, which leads to a concern that the assumption is not really correct and the transferability cannot be really valid among the LLMs. \\n\\n2. I have a concern about the method for NFP dataset construction. Following previous work, the authors adopted a method for QA datasets with short answers. However, not all current QA datasets generally fall into this category. It is better to show how much of the current QA datasets the method can cover and/or to describe how they can cope with QA datasets with longer or more complex answers.\\n\\n3. When the authors construct training data on just an LLM, the selection of the LLM might be important and might affect the performance. So it is better to show which LLM the authors selected as the best for constructing the training data and how they decided it.\\n\\n4. In human evaluations in Sec. 5.3, it is better to have comparisons with some baselines.\", \"questions\": \"1. While the authors criticized the output statement in cases when LLMs cannot provide factual answers at the end of Sec. 2, I could not understand the criticism because the statement is not necessarily a non-factual answer. 
I hope the authors will clarify the point.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A Kind Reminder to Check Our Responses\", \"comment\": \"Dear Reviewer yYWm,\\n\\nAs the discussion period nearing its end, we notice that we have not yet received your feedback on our responses. \\n\\nWe tried our best to address your concerns with detailed explanations. We kindly request you to check our responses and consider reassessing our paper. \\n\\nThank you once again for your time and effort, and we look forward to hearing from you.\\n\\nBest regards,\\n\\nAll authors\"}", "{\"title\": \"Response to Reviewer 3fBW (1/2)\", \"comment\": \"Dear reviewer 3fBW,\\n\\nWe sincerely appreciate your effort in reviewing our paper. Here we will carefully provide responses to your concerns.\\n\\n> **1. Motivation Behind FacLens**\\n\\n***- Why not use the MoE architecture:*** Non-MoE dense LLMs remain widely studied and applied. Following previous works (i.e., the baselines), we conducted experiments on non-MoE dense LLMs. Actually, there is no fundamental difference between MoE-based and dense LLMs, as both use the Transformer architecture, share training schemes (pre-training, SFT, RLHF), and overlap in training corpora. Thus, our work has the potential to be applied to MoE-based LLMs, though exploring this is beyond the scope of this paper and will be addressed in future work.\\n\\n***- Proof of the hypothesis that LLMs share similar cognitions:*** The rationality of our work is based on the transferability of FacLens across different LLMs, instead of proving the hypothesis that LLMs share similar cognitions. This hypothesis only serves as a motivating inspiration. In terms of our focus, we have conducted comprehensive experiments in Section 4.2 to validate the transferability of FacLens.\\n\\n> **2. 
Novelty**\\n\\nOur work is the first to explore an **efficient and transferable** approach for ante-hoc non-factuality prediction (NFP). **Reviewer GNy3 explicitly acknowledged the novelty of our work**, emphasizing that this kind of transferability is important in the NLP community. Unlike previous works, our work **simultaneously offers several key features**: 1) it is not limited to specific question templates, 2) it achieves promising results with a lightweight architecture, and 3) it supports unsupervised domain adaptation across LLMs for efficient development.\\n\\n> **3. Performance Improvement**\\n\\nOur work has **clear improvements** over existing works in practical applications **(efficiency beyond performance)** for the following reasons.\\n\\n***- Superiority of the ante-hoc (NFP) method FacLens over post-hoc (NFD) methods:*** In Figure 2, we compare the ante-hoc method (FacLens) with post-hoc methods (SAPLMA and INSIDE). Unlike post-hoc methods, which rely on costly answer generation, the ante-hoc method **avoids inference costs** and **controls risks in advance**. As shown in Figure 2, **despite post-hoc methods having more information (i.e., generated answers), FacLens still performs better.**\\n\\n***- Improvement compared to NFP baselines:*** Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both baselines in terms of training **efficiency** (see Table 2), which is a crucial factor for practical application.\\n\\n> **4. Baselines**\\n\\nNFD models (post-hoc) are fundamentally different from NFP models (ante-hoc) in their settings (see the 3rd response). Therefore, comparing FacLens with NFD models in Table 1 is not necessary. 
To highlight the advantages and potential of the ante-hoc approach, we have compared FacLens with typical NFD methods in Figure 2.\\n\\nFor the NFP baselines, we tried to systematically investigate the related work and compared FacLens with existing NFP baselines. We are open to including more works if you have any suggestions.\\n\\n\\n> **5. Experimental Settings**\\n\\n***- Form of answers:*** The factual QA scenario naturally favors short answers, as it focuses on querying specific real-world facts, such as names, numbers, birthplaces, dates, and so on. Additionally, most mainstream factual QA datasets adopt short answers. For these reasons, and to ensure a fair comparison, this paper follows our baselines to focus on common factual QA with typically short answers.\\n\\nWhile general QA with long-form answers (e.g., text summarization or literary creation) may include factual errors in generated responses, addressing factual errors in this context is another problem requiring a separate methodology, which presents a promising direction for future research. \\n\\n***- NFP baselines:*** We have done our best to investigate related work and to compare FacLens with existing NFP baselines. They are not all naive methods, **including advanced baselines** using model internals **such as SAT Probe [1], published at ICLR 2024**. \\n\\n[1] Mert Y\\u00fcksekg\\u00f6n\\u00fcl, et al. Attention satisfies: A constraint-satisfaction lens on factual errors of language models. In ICLR, 2024.\\n\\n\\n> **6. New Findings**\\n\\nWhile existing studies have shown that hidden representations in middle layers have good properties for other tasks, our work is the first to uncover that hidden question representations in middle layers contribute to the NFP task.\\n\\nBesides, this paper has several useful findings. 
For example, (1) hidden question representations from different LLMs share transferable NFP patterns (**acknowledged by other reviewers**), and (2) the ante-hoc method can potentially perform better than the post-hoc method.\\n\\n> **Q1: Typos**\\n\\nWe will proofread the paper again and fix the typos.\"}", "{\"title\": \"Summary of Key Advantages of This Work and Responses to Reviewers and Chairs\", \"comment\": \"Dear Reviewers and Chairs,\\n\\nWe sincerely appreciate the time and effort you have invested in reviewing our paper. We would like to summarize the key advantages of our work, which we hope will help you reassess it.\\n\\n- **Ante-hoc Risk Control and Superior Performance**: While post-hoc methods leverage more information (i.e., the LLM-generated answers), FacLens achieves comparable and even better performance (see Figure 2). 
This demonstrates that FacLens can not only avoid inference costs but also proactively control risks associated with non-factual responses.\\n\\n- **Pioneering Transferability in NFP (Acknowledged by All Reviewers)**: FacLens is the first work to effectively validate transferability in Non-Factuality Prediction (NFP), leveraging unsupervised domain adaptation to adapt to different LLMs without requiring new labeled data for each LLM. This innovative approach significantly improves efficiency.\\n\\n- **Lightweight and Effective**: FacLens is a streamlined model that outperforms previous NFP models in both effectiveness and efficiency, making it suitable for practical applications.\\n\\n- **Well-Thought Experiments (Supported by Reviewer GNy3)**: The experiments are carefully designed and comprehensive, showcasing FacLens's effectiveness and efficiency across multiple benchmarks. \\n\\n**We particularly appreciate Reviewer GNy3's acknowledgment that the domain adaptation of NFP models across LLMs is an important area of NLP research and should be encouraged across the community.**\\n\\nIn response to the reviewers' concerns, we have provided detailed clarifications and additional experiments. While we did not receive further detailed replies, we hope that the concerns have been adequately addressed. The instructive comments in the reviews can help us better improve this work. In response to Reviewer aR8Q\\u2019s comment, we conducted additional experiments on HotpotQA and TriviaQA, and the results continue to demonstrate the effectiveness of our approach.\\n\\nThank you once again for your efforts on reviewing this work. We hope our final response addresses your concerns effectively.\\n\\nBest regards,\\n\\nAll authors\"}", "{\"title\": \"Response to Reviewer GNy3\", \"comment\": \"Dear Reviewer GNy3,\\n\\nWe really appreciate your recognition of our paper as novel, important, and well-written. 
We are delighted that you consider our work an important contribution to the field of NLP and encourage its broader dissemination within the community. \\n\\nYour constructive suggestions will help us improve this paper. Here we will carefully address your questions in detail.\\n\\n> **1. Potential Use of LLMs with Larger Scales**\\n\\nThank you very much for this suggestion! We acknowledge the potential of using LLMs with larger scales (e.g., over 50B) and plan to explore this direction in future work. In practice, for many practitioners, training and deploying much larger LLMs presents substantial resource challenges, so they may still opt for LLMs under 10B. Our current work offers practical insights for them. \\n\\n> **2. Experimental Setup for the Domain Adaptation (DA)**\\n\\nTable 1 does not include the DA results. The reason is that existing baselines do not investigate and implement DA for cross-LLM Non-Factuality Prediction (NFP). \\n\\nIn the DA setting, we have no labeled data for the target LLM. For a fair comparison, we compare the domain-adapted FacLens with unsupervised NFP baselines, including PPL, Prompting, Entity-Popularity, and Self-Familiarity. We summarize the results in the following table, where the domain-adapted FacLens demonstrates promising performance compared to the unsupervised NFP baselines. 
We appreciate your suggestion and will rephrase the related experimental setup.\\n\\n**Comparison Between Domain-Adapted FacLens and Unsupervised NFP Baselines on LLaMA3 (the same trend can be found on other LLMs)**\\n\\n| Method | PQ | EQ | NQ |\\n|-----------------------------------------|-------------|-------------|-------------|\\n| PPL | 69.8 | 65.5 | 53.9 |\\n| Prompting | 70.6 | 64.9 | 57.2 |\\n| Entity-Popularity | 75.9 | - | - |\\n| Self-Familiarity | 61.8 | 68.4 | 52.0 |\\n| Domain-Adapted FacLens (Qwen2 -> LLaMA3) | 75.7 | 75.3 | 63.3 |\\n| Domain-Adapted FacLens (LLaMA2 -> LLaMA3) | 84.2 | 81.4 | 65.5 |\\n| Domain-Adapted FacLens (Mistral -> LLaMA3) | 85.4 | 83.0 | 64.0 |\\n| FacLens (w/o DA, trained with labeled data) | 86.5 | 85.0 | 68.9 |\\n\\n\\n---\\n\\nWe sincerely thank you again for your thoughtful review and positive feedback. Your recognition is a great encouragement to us. We would be happy to engage in further discussion and address any additional questions you may have.\\n\\nSincerely,\\n\\nAll authors\"}", "{\"title\": \"Kind Request for Feedback\", \"comment\": \"Dear Reviewer GNy3,\\n\\nThank you very much for taking the time to review our paper. We sincerely appreciate your valuable comments and have provided detailed responses to address your concerns.\\n\\nWe would greatly appreciate it if you could let us know whether our responses have addressed your concerns. If there are any remaining questions, we would be more than happy to provide further clarification and work towards resolving them.\\n\\nThank you once again for your time, effort, and consideration, and we look forward to your valuable feedback.\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"summary\": \"This work introduces FactLens, an NFP model which, unlike previous NFP methods, exhibits transferability across different LLMs. The authors make the following major contributions through the introduction of FactLens:\\n1. 
Show clear evidence of the importance of hidden layers in the pre-hoc/NFP setting.\\n2. Introduce a novel architecture for Factlens, such that the Factlens model weights can be adapted to a new LLM for good performance on the NFP task. \\n3. Conduct experiments to show superior performance in comparison to both post-hoc models and similar NFP models. Factlens is also considerably faster than other similar models.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper solves a series of important problems, notably the transferability in the NFP domain. This kind of domain adaptation is an important area of NLP research and should be encouraged across the community.\", \"The experiments are very well thought out, extremely detailed, and the paper is overall pretty decently written.\"], \"weaknesses\": [\"Certain questions that seem to remain open:\", \"The size of LLMs used for training seems small. While this isn\\u2019t a major concern, it would be good to understand how FactLens does with larger models (say size > 50B)\", \"It\\u2019s not clear whether Domain Adaptation is used for the results in Table 1. If not, how does the domain-adapted Factlens do in comparison to other NFP baselines? In general, the authors should clarify which of the results in the paper use DA for FactLens. This can be added in the experimental setup.\"], \"questions\": \"Nothing major, see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer aR8Q\", \"comment\": \"Dear Reviewer aR8Q,\\n\\nWe sincerely appreciate your effort in reviewing our paper. We are glad that you found it clear and recognized its excellent efficiency and transferability. Here we will carefully address your concerns and questions in detail.\\n\\n> **1. 
Performance Improvement**\\n\\n***- Clarification on Figure 3:*** There may have been a misunderstanding regarding the meaning conveyed by Figure 3. The similar performance of $f_m$ and $f_{mix}$ is, in fact, a **positive outcome**. Both $f_m$ and $f_{mix}$ represent our FacLens model, differing only in training data. Our goal in comparing $f_m$ and $f_{mix}$ is to demonstrate FacLens\\u2019s transferability across LLMs. The similarity in their performance supports FacLens\\u2019s transferability across LLMs.\\n\\n***- Clarification on Table 1:*** Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both of them in terms of training **efficiency** (see Table 2), which is crucial for practical applications. Moreover, as shown in Figure 2, we compared FacLens with post-hoc methods. Despite post-hoc methods having access to additional information (i.e., the generated answers), FacLens still performs better.\\n\\n> **2. Experimental Datasets**\\n\\nWe adopted widely recognized datasets frequently used in factual QA research, including the Natural Questions dataset, which is sourced from real user queries. Furthermore, we conducted human evaluation to collect real queries to further provide a practical assessment.\\n\\nThank you very much for the valuable suggestion on incorporating more QA datasets! We supplemented the main experiments with TriviaQA and HotpotQA, which are more complex QA datasets involving reasoning. 
The results demonstrate that our FacLens continues to perform well, and the domain adaptation is still effective.\\n\\n**Comparison between FacLens and NFP baselines on TriviaQA and HotpotQA**\\n| Method | TriviaQA (LLaMA2) | HotpotQA (LLaMA2) | TriviaQA (LLaMA3) | HotpotQA (LLaMA3) |TriviaQA (Mistral) | HotpotQA (Mistral) |TriviaQA (Qwen2) | HotpotQA (Qwen2) |\\n|--------------------------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|\\n| PPL |58.4 | 55.2 | 54.6 | 55.2 |54.9 |54.4 |52.7 |53.5|\\n| Prompting | 65.9 | 62.5 | 69.3 | 61.0 |73.8|63.0 | 57.6 | 62.1|\\n| Entity-Popularity | - | - | - | - | - | - | - | - |\\n| SAT Probe | - | - | - | - | - | - | - | - |\\n| Self-Familiarity | 61.9 | 55.3 | 57.3 | 56.8 |59.3 | 54.5 |62.1 | 53.7 |\\n| LoRA (Parameter-Efficient FT) | 71.7 | 72.9 | 51.6 | 68.0| 64.1 | 66.9 | 59.4 | 70.8 |\\n| Self-Evaluation (Fully FT) |70.8 | 75.0 | 63.3 | 69.3 | 62.0 | 71.1 | 62.4 | 72.1 |\\n| FacLens (last token, middle layer) | 74.2 | 75.5 | 67.6 | 66.9 |69.2 | 74.9 | 70.3 | 71.4|\\n| FacLens (last token, last layer) | 75.0 | 74.3 | 65.1 | 68.6 | 67.7 | 74.1 | 70.0 | 72.1 |\\n| FacLens (last token, 2nd to last layer) | 74.2 | 74.7 | 69.0 | 68.3 | 70.9 | 74.7 | 71.3 | 72.7|\\n\\n*Note: \\\"-\\\" indicates that the method is not applicable to the dataset. 
Compared to the most competitive baselines (i.e., LoRA and Self-Evaluation), FacLens demonstrates a significant improvement in training efficiency (see Table 2 in our paper).*\\n\\n**Evaluation of domain adaptation for cross-LLM FacLens on TriviaQA and HotpotQA**\\n| Method | w/o DA | w/ DA |\\n|---------------------|-------------------|------------------|\\n| LLaMA2 -> LLaMA3 (TriviaQA) | 44.7 | 64.5 |\\n| LLaMA3 -> LLaMA2 (TriviaQA) | 57.3 | 70.5 |\\n| LLaMA2 -> LLaMA3 (HotpotQA) | 46.2 | 67.3 |\\n| LLaMA3 -> LLaMA2 (HotpotQA) | 38.7 | 70.9 |\\n\\nWhile conducting the suggested experiments, we made **another interesting observation**. We observed that for the more complex factual QA tasks (TriviaQA and HotpotQA), FacLens prefers hidden question representations from the later layers. This may be because LLMs tend to engage in deeper thinking when facing complex factual questions. We will incorporate this interesting observation into the next version of our paper. \\n\\n---\\n\\nWe sincerely thank you again for the constructive comments and the recognition of the strengths of our work. We would be delighted to engage in further discussion and address any additional questions you may have.\\n\\nSincerely,\\n\\nAll authors\"}", "{\"title\": \"Request for Clarification on Remaining Concerns\", \"comment\": \"Dear Reviewer yYWm,\\n\\nThank you for your feedback. We noticed that you have decided to keep your rating unchanged. 
We would greatly appreciate it if you could clarify which concerns remain unresolved.\\n\\nYour insights are crucial for helping us improve our work, and we are more than willing to engage in further discussions with you.\\n\\nLooking forward to hearing from you.\\n\\nBest regards,\\n\\nAll authors\"}", "{\"title\": \"A Kind Reminder to Check Our Responses\", \"comment\": \"Dear Reviewer 3fBW,\\n\\nAs the discussion period nears its end, we notice that we have not yet received your feedback on our responses.\\n\\nWe tried our best to address your concerns with detailed explanations. We kindly request you to check our responses and consider reassessing our paper.\\n\\nThank you once again for your time and effort, and we look forward to hearing from you.\\n\\nBest regards,\\n\\nAll authors\"}", "{\"title\": \"Kind Request for Feedback\", \"comment\": \"Dear Reviewer yYWm,\\n\\nThank you very much for taking the time to review our paper. We sincerely appreciate your valuable comments and have provided detailed responses to address your concerns.\\n\\nWe would greatly appreciate it if you could let us know whether our responses have addressed your concerns. If there are any remaining questions, we would be more than happy to provide further clarification and work towards resolving them.\\n\\nThank you once again for your time, effort, and consideration, and we look forward to your valuable feedback.\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"summary\": \"The paper introduces a model (FacLens) to predict the likelihood of language models (LMs) generating non-factual responses before generation occurs, a task called non-factuality prediction (NFP). This work claims that, unlike traditional non-factuality detection (NFD) methods that probe response representations, FacLens probes the question's hidden representations to make non-factuality predictions. 
FacLens can be adapted to different LMs by leveraging unsupervised domain adaptation techniques, which reduces the resource-intensive need to generate new labeled data for each model. The authors conduct experiments across four models and three datasets to demonstrate FacLens's superior performance and efficiency compared to baseline methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. FacLens can be adapted to different LMs by leveraging unsupervised domain adaptation techniques, which reduces the resource-intensive need to generate new labeled data for each model.\\n2. The authors present a shift from traditional non-factuality detection (NFD) to non-factuality prediction (NFP). They show that models are internally aware of whether they can accurately answer a question before generation.\", \"weaknesses\": \"1. The authors claim that different LLMs share similar cognitive patterns in terms of knowledge awareness, as they rely on transformer-based architectures. However, not all LMs use the same architecture; for instance, recent MoE architectures, which have gained significant popularity, replace feed-forward networks with MoE modules. It is essential to study MoE-based models to examine if this claim holds. Additionally, the proof of this hypothesis is unclear and not convincing and needs further support.\\n2. Apart from the domain adaptation techniques, FacLens\\u2019s development heavily relies on previous work and lacks substantial novelty.\\n3. The overall performance gain compared to baselines, particularly SAPLMA, is marginal, and so there is no compelling evidence that probing question hidden representations leads to better non-factuality prediction.\\n4. In the main experiments (Table 1), NFD baselines are excluded, and only a selected set of methods categorized under NFP are reported.\\n5. 
The experiments do not represent a practical LM generation setting, as they are limited to a set of short-form QA datasets. While the authors define the NFP task, they compare it with naive baselines, such as entity popularity, and do not consider more sophisticated methods developed for factuality improvement using model internals.\\n6. Some findings, such as LLMs generally recognizing \\u201cwhether they know\\u201d in their middle layers, have been previously reported and are not new findings.\\n\\nOverall, this paper lacks significant contributions, and the limited experimental setup and marginal performance gains make it challenging to claim that the proposed method is more effective than its existing counterparts.\", \"questions\": \"Please refer to the weaknesses for clarification. Also, the paper has multiple typos that can be addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces FactLens, a probing method designed to predict whether a large language model (LLM) is likely to provide factual responses to a given question. Additionally, the authors demonstrate that FactLens can be effectively transferred across different models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method is clear and straightforward. FactLens is a simple probing method to assess question factuality, and its streamlined structure makes it easy to adopt.\\n2. Excellent efficiency and transferability. The experiments demonstrate that FactLens can be effectively transferred to other models, performing well across various benchmarks, including PopQA, Entity Questions, and Natural Questions.\", \"weaknesses\": \"1. The primary weakness is that FactLens does not show a clear performance improvement over previous methods. 
Both Figure 3 and Table 1 indicate that FactLens performs comparably to, but not significantly better than, prior approaches.\\n2. The experiment lacks a wider range of benchmarks. Adding more datasets, such as TriviaQA [1] and HotpotQA [2], could provide a more comprehensive evaluation.\\n\\n[1] TriviaQA: A large-scale distantly supervised challenge dataset for reading comprehension. Joshi et al., 2017. \\n[2] HotpotQA: A dataset for diverse, explainable multi-hop question answering. Yang et al., 2018.\", \"questions\": \"Refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Kind Request for Feedback\", \"comment\": \"Dear Reviewer 3fBW,\\n\\nThank you very much for taking the time to review our paper. We sincerely appreciate your valuable comments and have provided detailed responses to address your concerns.\\n\\nWe would greatly appreciate it if you could let us know whether our responses have addressed your concerns. If there are any remaining questions, we would be more than happy to provide further clarification and work towards resolving them.\\n\\nThank you once again for your time, effort, and consideration, and we look forward to your valuable feedback.\\n\\nBest regards,\\n\\nAll Authors\"}", "{\"title\": \"A Kind Reminder to Check Our Responses\", \"comment\": \"Dear Reviewer aR8Q,\\n\\nWith the discussion period nearing its end, we notice that we have not yet received your feedback on our responses.\\n\\nWe tried our best to address your concerns, providing detailed explanations and additional experimental results to clarify your concerns. 
We kindly request you to check our responses and reconsider your assessment of our paper.\\n\\nThank you once again for your time and effort, and we look forward to hearing from you.\\n\\nBest regards,\\n\\nAll authors\"}", "{\"title\": \"Response to Reviewer yYWm\", \"comment\": \"Dear reviewer yYWm,\\n\\nWe sincerely appreciate your constructive review and are delighted that you found our work interesting. Below, we will provide detailed responses to the questions raised.\\n\\n> **1. Explanation About the Tansferability of FacLens**\\n\\n***- How to Explain Observations in Section 4.2***\\n\\nCurrent LLMs commonly share the similar architecture (i.e., Transformer), training scheme (i.e., pre-training, SFT, and RLHF), and overlapping knowledge resource (i.e., training corpora), e.g., LLMs such as Qwen and LLaMA, use a mix of data from publicly available sources. This suggests that LLMs are likely to have similar cognition styles.\\n\\nIn human cognition, individuals with similar cognitive styles show similar brain activity when performing the same task. This inspires us to explain why hidden question representations from different LLMs (i.e., LLMs' brain activation during question-thinking) exhibit similar NFP patterns.\\n\\n***- Why Unsupervised Domain Adaptation (DA) is Effective for Cross-LLM FacLens***\\n\\nThe reason was explained in Section 4.2. In specific, we outlined the premise of unsupervised DA. Then we designed experiments (see Figures 3, 4, and 5) to demonstrate that cross-LLM FacLens satisfies this premise.\\n\\n***- Generalization of the Observations in Section 4.2***\\n\\nIn Section 4.2, we conducted experiments on various LLMs, including LLaMA2, LLaMA3, Mistral, and Qwen2. The results across different LLMs lead to the same observation (see Figures 3, 4, and 5), demonstrating the generalizability of our observations. 
For example, in each subfigure of Figure 5, samples from different LLMs form clusters and share the same classification boundary for NFP.\\n\\n***- Correctness of the Transferability***\\n\\nThe transferability of FacLens is validated by comparing results without and with DA, as shown in the upper and lower of Figure 6. Without DA (upper), FacLens trained on one LLM performs poorly on another, especially Qwen2, which differs from the other three LLMs in hidden dimensions and scale. With DA (lower), the performance of FacLens on the target LLM, including Qwen2, improves significantly.\\n\\nNote that Qwen2 used in this paper differs in scale from the other LLMs. As mentioned on page 10, while FacLens shows transferability between any two LLMs, better transferability is observed among LLMs of similar scales. This does not affect the validity of the demonstrated transferability. Future work will explore methods to enhance FacLens\\u2019s transferability across LLMs of different scales.\\n\\n> **2. QA Datasets with Long Answers**\\n\\nThe factual QA scenario naturally favors short answers, as it focuses on querying specific real-world facts, such as names, numbers, birthplaces, dates, occupations, and so on. Additionally, most mainstream factual QA datasets adopt short answers. For these reasons, like previous works [1, 2, 3], this paper focuses on common factual QA, which typically involves short answers. \\n\\nWhile general QA with long-form answers (e.g., text summarization or literary creation) may include factual errors in generated responses, addressing factual errors in this context is another problem requiring a separate methodology, which presents a promising direction for future research.\\n\\n> **3. Selection of LLM for Constructing Labeled Training Data**\\n\\nThe choice of LLM for constructing labeled training data is flexible. Our goal is to explore whether a FacLens trained for one LLM can quickly adapt to another. 
The unsupervised DA of cross-LLM FacLens has shown to be effective across any LLM pair (see Figure 6). When a specific LLM can be selected, based on our experiments, we recommend transferring between LLMs of similar scales or from larger to smaller-scale LLMs.\\n\\n> **4. Including Baselines in Human Evaluation**\\n\\nThank you very much for the helpful suggestion! We will certainly integrate the NFP baselines into our demo to facilitate more comprehensive human evaluation.\\n\\n> **Q: Clarification on the End of Section 2**\\n\\nIn Section 3.1, we defined that if an LLM-generated answer fails to convey the queried fact, it is a non-factual response. Therefore, the output statement \\u201cI apologize, but I don\\u2019t have information on ...\\u201d should be considered as non-factual. Our intent is not to criticize this statement but to clarify that they do not convey the queried fact. We will better claim this in the next version.\\n\\n[1] Alex Mallen, et al. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In ACL, pp. 9802\\u20139822, 2023.\\n\\n[2] Mert Y\\u00fcksekg\\u00f6n\\u00fcl, et al. Attention satisfies: A constraint-satisfaction lens on factual errors of language models. In ICLR, 2024.\\n\\n[3] Saurav Kadavath, et al. Language models (mostly) know what they know. CoRR, abs/2207.05221, 2022.\\n\\n---\\n\\nWe sincerely thank you again for your detailed review and valuable suggestions. We would be happy to engage in further discussion and address any additional questions you may have.\\n\\nSincerely,\\n\\nAll authors\"}" ] }
0QePvFoqY6
IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera
[ "Jian Huang", "Chengrui Dong", "Peidong Liu" ]
Implicit neural representation and explicit 3D Gaussian Splatting (3D-GS) for novel view synthesis have achieved remarkable progress with frame-based camera (e.g. RGB and RGB-D cameras) recently. Compared to frame-based camera, a novel type of bio-inspired visual sensor, i.e. event camera, has demonstrated advantages in high temporal resolution, high dynamic range, low power consumption and low latency. Due to its unique asynchronous and irregular data capturing process, limited work has been proposed to apply neural representation or 3D Gaussian splatting for an event camera. In this work, we present IncEventGS, an incremental 3D Gaussian Splatting reconstruction algorithm with a single event camera. To recover the 3D scene representation incrementally, we exploit the tracking and mapping paradigm of conventional SLAM pipelines for IncEventGS. Given the incoming event stream, the tracker firstly estimates an initial camera motion based on prior reconstructed 3D-GS scene representation. The mapper then jointly refines both the 3D scene representation and camera motion based on the previously estimated motion trajectory from the tracker. The experimental results demonstrate that IncEventGS delivers superior performance compared to prior NeRF-based methods and other related baselines, even when we do not have the ground-truth camera poses. Furthermore, our method can also deliver better performance compared to state-of-the-art event visual odometry methods in terms of camera motion estimation.
[ "3D Gaussian", "Event Camera" ]
https://openreview.net/pdf?id=0QePvFoqY6
https://openreview.net/forum?id=0QePvFoqY6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sirug4K9S7", "YHgzEfXYfO", "HMoRKiswMN", "FEAOGvXHfP", "Aw2ePgxZ6o" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730692824791, 1730613887489, 1730553260518, 1729018928951, 1731645046614 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3822/Reviewer_X8Zm" ], [ "ICLR.cc/2025/Conference/Submission3822/Reviewer_UfLZ" ], [ "ICLR.cc/2025/Conference/Submission3822/Reviewer_gT1i" ], [ "ICLR.cc/2025/Conference/Submission3822/Reviewer_xHif" ], [ "ICLR.cc/2025/Conference/Submission3822/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes IncEventGS, an incremental dense 3D reconstruction method using a single event camera. To incrementally recover the 3D scene, IncEventGS leverages the tracking and mapping approach of traditional SLAM. The tracker first estimates initial camera motion from prior 3DGS reconstructions, while the mapper refines both the 3D scene and camera motion using the tracker\\u2019s motion trajectory estimates. The advantage of IncEventGS does not require any ground truth camera poses. The results show that IncEventGS outperforms prior NeRF-based methods and related baselines, even without ground-truth camera poses. Additionally, it surpasses SOTA event-based VO methods in camera motion estimation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The topic of event-based 3D reconstruction without camera pose is very interesting topic.\\n\\n2. The authors conducted extensive experiments demonstrating that IncEventGS outperforms previous NeRF-based methods and other baselines, even without ground-truth camera poses.\\n\\n3. The writing is clear and easy to understand.\", \"weaknesses\": \"1. 
While I acknowledge that this paper is the first to explore 3D reconstruction using a single event camera combined with 3D ground segmentation without camera poses, its novelty appears to be limited. There are existing works using traditional RGB cameras for 3D reconstruction without relying on camera poses, and the approach of directly accumulating events into event images does not clearly highlight significant contributions to the field, whether from the image-based 3D ground segmentation community or the event-based community. I encourage the authors to articulate the specific technical contributions of this work.\\n\\n2. I recommend that the authors include more examples of extreme scenarios, such as high-speed motion and low-light conditions, alongside comparisons with RGB images. This could better demonstrate the advantages of using event cameras for 3D reconstruction.\\n\\n3. Regarding the possibility of achieving colored 3D reconstruction, can this method be applied? Since there are existing color event cameras, could the authors obtain data from such cameras to create an example of colored reconstruction?\\n\\n4. The writing could be further improved in several ways: a) The title in line 97 should be bolded and capitalized. b) Section 3.2 does not require an extensive explanation of event camera principles and image accumulation. c) The font sizes in Tables 1 and 2 should be made consistent.\", \"questions\": \"Please see the weaknesses. I have assigned a preliminary score based on the initial manuscript, but I may adjust this score based on the authors' responses and feedback from other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"1) This manuscript presents a method in which a 3D scene is reconstructed using a single event camera and 3D-GS. 
The authors describe a process where the 3D scene reconstruction does not require provided camera poses. The 3D-GS parameters and camera poses are simultaneously calculated, using a concept similar to SLAM, but generating a dense 3D scene.\\n\\n2) The presented method produces results that outperform the current state-of-the-art by a significant margin.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) An original concept in which 3D-GS and camera poses are optimized simultaneously.\\n\\n2) Results that surpass the current state-of-the-art.\", \"weaknesses\": \"The manuscript is clearly written but does not explain in a precise and in-depth manner how it is carried out. In other words, the concepts expressed are only shown at a high level, without delving into small key details, such as the \\u201ccontinuous time trajectory parameterization\\u201d or how \\u201cthe camera poses (T_k) and (T_{k+\\\\Delta t}) can be interpolated,\\u201d and how exactly to \\u201crender two grayscale images (i.e., (\\\\hat{I}k) and (\\\\hat{I}{k+\\\\Delta t})) from the previously recovered 3D-GS,\\u201d which makes it very difficult to reproduce the results.\\n\\nAlthough the manuscript mentions some studies related to 3D-GS and event cameras, it does not mention 3D-GS works that perform 3D reconstruction or novel view synthesis with pose-free cameras and frame-based cameras.\", \"questions\": \"1) Although this type of work is new in the area of event cameras, why is there no mention of other pose-free camera work in the field of frame-based cameras?\\n\\n2) Since the document only expresses high-level ideas, are there any plans to make the code publicly available in the future?\\n\\n3) Why are there no supplementary videos supporting the results shown in the manuscript?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There are no concerns regarding ethics\", \"rating\": \"5\", 
\"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes IncEventGS, which is an incremental 3D Gaussian Splatting reconstruction algorithm with a single event camera. IncEventGS employed the tracking and mapping paradigm of conventional SLAM pipelines.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The results of IncEventGS shown in Tabs 1 and 2 are amazing and effective.\\n\\n2. This paper is written in an easy-to-understand manner.\", \"weaknesses\": \"1. The motivation of this paper is weak, as the paper claims, \\\"Due to its unique asynchronous and irregular data capturing process, limited work has been proposed to apply neural representation or 3D Gaussian splatting for an event camera\\\". I think the authors should discover the reasons behind it rather than the superficial phenomenon.\\n\\n2. The title is \\\"Pose-free.\\\" Why the author did this is not explained. I think that although no pose ground truth is provided, using conventional slam pipelines actually provides this variable implicitly. Conventional slam pipelines will be more robust than pose estimators, which use deep learning methods.\\n\\n3. This paper mentions several times \\u201cdue to its time continuous, sparse, asynchronous and irregular data capturing characteristics.\\u201d I don't think the authors have solved this problem; they are still taking the approach of stacking events into the frame.\\n\\n4. In line 62, citation duplication.\\n\\n5. The contribution needs to be rewritten, which is just like changing the representation from Nerf to GS. However, this work has already been done.\\n\\n6. \\\"The main insight of IncEventGS is to accumulate incoming event data into chunks and treat each chunk as a special \\\"image\\\". This is not a contribution and does not need to be emphasized.\\n\\n7. In line 216 and 307, C in (3) and equation 6.\", \"questions\": \"1. 
Why did IncEventGS stop using the ground-truth pose after adopting Gaussian Splatting representations compared to Nerf-based representations?\\n\\n2. As we know, 3DGS hardware friendliness is superior to Nerf-based representations, and I'm curious about the overall runtime of the system compared to Nerf-based.\\n\\n3. More experiments need to be done, such as Tanks and Temples.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The present paper proposes a novel view synthesis approach that reconstructs 3D scenes from event camera streams without precise camera poses. The main goal is to handle the unique asynchronous and irregular data characteristics of event cameras, which pose significant challenges for traditional 3D reconstruction methods. By utilizing a tracking and mapping approach inspired by SLAM pipelines, the method estimates camera motion based on prior reconstructed 3D-GS scenes and incrementally refines both the camera motion and scene representation. It's capable of handling real-world scenarios with no gt. poses, offering improved performance compared to NeRF-based methods and event-based visual odometry. It efficiently renders high-quality brightness images, outperforming baseline methods in terms of novel view synthesis and motion estimation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Overall, this paper presents an incremental 3D Gaussian Splatting reconstruction algorithm from a single event camera, without requiring the ground truth camera poses. It has a motivation and is adequate for the audience and also solid on the technical side and adds the event-based VO tricks and off-the-shelf depth model for re-initialization to 3DGS. Thus, this work is interesting for readers working at the intersection of novel view synthesis and neuromorphic vision.\", \"weaknesses\": \"1. 
The setup of the real-world experiments lacks validity. Specifically, for the evaluation on the TUM-VIE dataset, the qualitative results alone are insufficient. The authors should also include quantitative analysis using no-reference image quality assessment metrics to provide a more comprehensive evaluation.\\n\\n2. Although the proposed method can operate using an event stream as input to reconstruct 3D Gaussians, it still relies on uniform event stream as input. The proposed method is, therefore, limited by the density of event data streams, which restricts its practical applications. \\n\\n3. Despite the detailed comparison of the quality of rendered images, the efficiency of the training and rendering process is not included, which is an important metric of NVS methods. Extra comparisons with other methods on training time and inference FPS would help better evaluate the proposed method.\\n\\n4. This method is valuable for addressing event-based visual odometry. However, the authors focus more on the NVS task, and using Gaussian functions to reconstruct grayscale scenes seems less relevant, as they are mainly suited for head-mounted devices, which reduces the method\\u2019s rationale.\", \"beyond_this_i_have_mainly_minor_comments_and_nitpicks\": \"l.117, the sentence contains a grammatical error and should be revised. Specifically, \\\"IncEventGS conduct...\\\" should be corrected to \\\"IncEventGS conducts...\\\".\\n\\nl.142, the expression should be standardized by changing \\\"se3\\\" to \\\"se(3)\\\" for clarity and consistency.\\n\\nl.162~186, I think the re-initialization process is vital to the method, but the main figure of the pipeline does not reflect this which may generate some confusion with readers not familiar with the method.\", \"questions\": \"1. The re-initialization using the pre-trained depth model for regularization with the proposed Gaussian model is not clear. Can the authors provide more details about it? 
Especially regarding the visualization of the intermediate process.\\n\\n2. For SLAM or VIO, the accuracy of the trajectory is crucial. However, for NVS (Novel View Synthesis) tasks, the proposed method merely reconstructing a gray map of the scene can diminish the significance of the task to some extent. It is not enough to work only on the gray map. Could we perform the NVS task on the RGB event dataset? For example, the dataset from [1] or [2] or the event-based color synthetic Replica dataset.\\n\\n3. What's more, I noticed that the authors did not provide any supplementary materials. Could the authors provide some visual demos to better observe the overall effect of this method?\\n\\n[1] Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion\\n\\n[2] EventNeRF: Neural Radiance Fields from a Single Color Event Camera\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0QZcoGdmtJ
Auditing $f$-Differential Privacy in One Run
[ "Saeed Mahloujifar", "Luca Melis", "Kamalika Chaudhuri" ]
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient -- requiring multiple runs of the machine learning algorithms -- or suboptimal in calculating empirical privacy. In this work, we present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms. Our approach is efficient; similar to the recent work of Steinke, Nasr, and Jagielski (2023), our auditing procedure leverages the randomness of examples in the input dataset and requires only a single (training) run of the target mechanism. And it is more accurate; we provide a novel analysis that enables us to achieve tight empirical privacy estimates by using the hypothesized $f$-DP curve of the mechanism, which provides a more accurate measure of privacy than the traditional $\epsilon,\delta$ differential privacy parameters. We use our auditing procedure and analysis to obtain empirical privacy, demonstrating that our auditing procedure delivers tighter privacy estimates.
[ "Differential privacy", "Auditing privacy" ]
Reject
https://openreview.net/pdf?id=0QZcoGdmtJ
https://openreview.net/forum?id=0QZcoGdmtJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gQJZE1F2rb", "agMU5QZQKO", "aJimoVNvH7", "WzJ9NpNcOq", "TtNX4Ijlhd", "SGFHR8WNwy", "L3f0R5gRO6", "Hf3kZTkDMz", "GoDEPmHKcn", "Ep9h4Hv3Do", "CiOiFYxuso", "At8jTJ4Rvm", "7HmHBPdOKw", "75EUwQ7c3B", "6yJpGhLwDA", "5W5ut2mbJk", "5GzXIZd1si", "2qT7Prifan", "2IcLKvzDyM", "1ZqbNF7y31", "0Ixn0cWe8d" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733198440088, 1730708251501, 1732613014370, 1733194278070, 1734204398335, 1733197190094, 1732227269296, 1730711711446, 1732227500302, 1737524174824, 1733182145148, 1733179840034, 1730281710774, 1732228020914, 1732526439335, 1733197179998, 1733181530055, 1732227374637, 1733222285509, 1732564272952, 1732576974544 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_QcyN" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_R2zh" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_xLjh" ], [ "ICLR.cc/2025/Conference/Submission12235/Area_Chair_xeGY" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_CU1c" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_R2zh" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_CU1c" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" 
], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_QcyN" ], [ "ICLR.cc/2025/Conference/Submission12235/Authors" ], [ "ICLR.cc/2025/Conference/Submission12235/Reviewer_xLjh" ] ], "structured_content_str": [ "{\"comment\": \"We greatly appreciate all your comments and feedback. We hope that our response adequately addresses your questions and concerns. We would be grateful for any additional feedback and suggestions you might have.\\n\\nPlease also accept our apologies for our response to your feedback being posted later than others. Your review came in later, and we were in the process of completing some experiments before posting our response.\"}", "{\"summary\": \"The paper presents a novel algorithm designed to audit $f$-DP guarantees within a single execution of a mechanism.\\nThis area of research has become increasingly significant within the privacy community, particularly due to the limitations of existing auditing mechanisms.\\nExisting empirical auditing methods are either computationally expensive (requiring multiple runs of the machine learning algorithm) or fall short in providing a tight empirical privacy guarantee.\\nThe need to run the mechanism multiple times has hindered practical applications.\\nSteinke et al. (2023) introduced a pioneering approach that balances the number of runs with the tightness of the audit.\\nThis present work enhances this trade-off further by auditing $f$-DP guarantees, which provide a more precise representation of a mechanism's privacy compared to traditional approximate DP parameters.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Valuable Contribution to Existing Research: There has been extensive work on auditing differential privacy guarantees. 
This paper distinguishes itself by offering a solution that enhances both computational efficiency and the precision of empirical privacy guarantees. The reliance on multiple runs of the mechanism has been a major obstacle to the widespread application of auditing methods. Their approach, requiring only a single run, makes auditing significantly more practical, especially for complex machine-learning algorithms involving extensive model training.\", \"Using the $f$-DP framework is a particularly strong aspect of this work. $f$-DP offers a more general and accurate representation of a mechanism's privacy compared to traditional approximate differential privacy. This choice allows for a more fine-grained and robust analysis of privacy. The authors convincingly demonstrate that auditing $f$-DP leads to tighter empirical privacy assessments. By performing the analysis in a single training run, the paper achieves a more comprehensive understanding of the privacy implications within a practical computational framework.\"], \"weaknesses\": [\"The main weakness of this paper is its presentation. The write-up seems very rushed which at times hinders the flow of the reader. Many references are broken e.g. reference to Algorithm B. Lines 300-312 contain many typos and incomplete sentences. These are issues that can be addressed quickly but in the current state I would argue that the presentations limits the value of this work to the community.\", \"The authors have not provided a code artifact. While the contributions of this work are mostly theoretical, the implementation of the algorithm requires care and it would help reproducibility if a code artifact were supplied.\"], \"questions\": [\"On the section \\u201cEmpirical Privacy\\u201d line no 307, why do the trade off curves need to be ordered? 
If you have a set of trade off curves $f_i$ that pass couldn\\u2019t you build a new trade off curve $f(x) = \\\\min_i f_i(x)$\", \"In what sense are the empirical results tight in Fig 7 and why is that not also evident in Fig 1?\", \"Can you explain why abstentions are important in this algorithm?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"My comments have been adequately addressed and I believe the paper meets the criteria for acceptance.\"}", "{\"comment\": \"I would appreciate if the authors could answer the questions that were raised in my review.\"}", "{\"metareview\": \"## Summary of Contributions\\n\\nThis paper studies auditing of differential privacy (DP) with one run of the algorithm. Previous work (e.g. (Steinke et al., 2023)) studies the setting where we wish to test whether the algorithm satisfies $(\\\\epsilon, \\\\delta)$-DP for a single pair of $(\\\\epsilon, \\\\delta)$. This work extends the test to the setting of *$f$-DP*, which--roughly speaking--gives an entire curve for the values of $(\\\\epsilon, \\\\delta)$. By deriving sharper bounds on the accuracy of the guessing game assuming $f$-DP (Theorem 9), the authors show that $f$-DP allows for a better auditing compared to $(\\\\epsilon, \\\\delta)$-DP; this (partially) answers an open question from (Steinke et al., 2023). The authors also demonstrate this numerically, showing improvements in auditing the Gaussian mechanism and DP-SGD.\\n\\n## Strengths\\n\\n- The paper makes a clear contribution towards privacy auditing in one run, using an additional concept of $f$-DP. This is novel as previous works have only consider a single $(\\\\epsilon, \\\\delta)$ pair when auditing. 
Furthermore, $f$-DP is an important concept that has led to tight numerical privacy accounting in recent years and, thus, this is a well motivated and important setting.\\n\\n- Empirically, the method has shown to significantly improve the empirical privacy from auditing compared to previous work.\\n\\n## Weaknesses\\n\\n- **Presentation**: Multiple reviewers find the presentation qualities to be inadequate. Some central concepts (e.g. empirical privacy) and details were not defined in the original version of the submission. Discussions on the experiment details and effects of different parameters were also insufficient.\\n\\n- **Significant Changes in the revision**: Although these have been added in the revision, the changes are quite significant (as can be seen by the diff in the latest supplementary material; perhaps ~30% of the main text). Some of the potential issues--which are only apparent after the change--are listed below. Due to this, it might be best if the paper is re-submitted and fully reviewed again rather than accepted as is.\\n\\n- **Possible Flaw in Empirical Privacy**: The \\\"empirical privacy\\\" (Definition 7) used in this paper might be flawed and incomparable with previous work e.g. Steinke et al. (2023). To explain this, let us assume that the accuracy is 100% and consider the scenario where the test fails; what Steinke et al. (2023) works say is that the algorithm is *not $(\\\\epsilon, \\\\delta)$-DP*. However, what the test in this paper shows is that the algorithm is *not $f$-DP*. In this latter scenario, we do *not* know where the failure comes from; it could be that it fails for $\\\\delta = 10^{-5}$ or at $\\\\delta = 10^{-7}$ etc. As a result, we cannot simply pick a single $\\\\delta$ and report the $\\\\epsilon$ at that $\\\\delta$, because the violation might not happen there. 
Due to this, it is unfair to compare to previous work in a manner done in Section 4.1 (and this is not discussed at all in the paper).\\n\\n- **Practicality / Limitations**: Related to the above, this also highlights the fact that we need to pick the $f$ curve very carefully to get any meaningful result from such testing at all. This will likely limit the practicality of the algorithm significantly since, for more complicated algorithms other than the Gaussian mechanism, the $f$ curve is not well understood and computing them can be numerically challenging. (The choice of $f$ is mentioned briefly before Section 4.1 but the discussion does not dive into such subtlety.)\\n\\n## Recommendation\\n\\nAlthough auditing $f$-DP is an important direction and this paper provides some interesting initial work, more investigation / clarification should be done before the paper is ready for publication. Furthermore, it would be useful for the paper to get the proper full end-to-end review it deserves, given the significant changes during the rebuttal. Due to these, we recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"As mentioned above, the original version of the submission has many presentation issues, e.g. missing many important concepts / discussions, such as \\\"empirical privacy\\\" which is central to the paper's results. The rebuttal / revision was mostly trying to alleviate this. As stated in the meta-review, this results in a huge amount of change, some of which seems to reveal some additional flaws. Given this, I strongly suggest this paper be rejected so that it can be reviewed more thoroughly.\"}", "{\"comment\": \"- **Can you discuss more how the design of the set of canaries impacts the reconstruction games?**\\n\\nThe best case for reconstruction games is to select canaries in a really distinct way so that the adversary's job in detecting the selected canary becomes easier. 
However, we emphasize that the choice of canaries should be aligned with the distribution of data to avoid degradation in utility. This is especially important in the one-run setting, where utility is crucial. For instance, in the case of a reconstruction game, you can consider choosing a random sample from the CIFAR dataset and then creating 10 versions of this sample by adding a small number in the top corner of the image. Then the role of the adversary is to identify which image is more likely to be used. We expect this not to have much effect on the utility of the model because the interference from canaries would not change the data distribution. Alternatively, you can use augmentation techniques to augment a sample in 10 different ways and then decide which augmentation was used. The modular nature of our analysis makes it possible to use any attack and canary selection setup and obtain empirical privacy. Although this way of selecting canaries will not be as effective as creating canaries in an adversarial manner, it can still obtain empirical privacy numbers that are close to the theoretical bounds.\"}
For us, this effect does not happen because our analysis does not rely on $\\\\delta$ at a single $\\\\epsilon$ to bound the probability of bad events. Our reliance on the $f$-curve enables us to keep improving with more canaries.\\n\\n> What potential sources contribute to any lack of tightness in your lower bounds? Are there specific aspects of the f-DP framework or your implementation that introduce looseness? How might these be addressed in future work to enhance the tightness of the bounds?\\n\\n\\nThis is a great question. We also touched on this in the same paragraph referenced above (Page 10 line 516). We believe our Theorem 9 is tight. However, the way we use Theorem 9 to bound the tail is still subject to improvement. There is a subtle point of sub-optimality in our Algorithm 10. In this algorithm, we make use of the fact that the expectation of correct guesses, conditioned on the number of correct guesses being greater than c, divided by the expectation of incorrect guesses conditioned on the same event, is greater than c/c\\u2032. This step is not tight as we cannot have a mechanism where the adversary makes exactly c correct guesses with probability greater than 0, while making more than c correct guesses with probability exactly 0.\\n\\nWe again emphasize that our Theorem 9 is tight; we only need to find a more optimal way to find the worst case profile that is consistent with Theorem 9. Currently Algorithm 10 is the best we have, but that could improve in future work.\\n\\n\\n\\n> How does your algorithm perform in the black-box setting compared to the white-box setting? Can you provide detailed experimental results illustrating this performance?\\n\\nThank you for this suggestion. We have performed two sets of experiments in the black-box setting and provided the results in the Experiments section. In the first experiment, we use the same setup as that of Steinke et al. with m=n=1000 and perform black-box attacks against models trained on CIFAR10 using DP-SGD. 
We use the same black-box membership inference attacks as them and obtain empirical privacy. We outperform their bounds in all settings. (See Figure 2)\\n\\nWe also performed an experiment on non-private models trained on CIFAR10. A benefit of empirical privacy is that it can be calculated on models that are not theoretically private but are hypothesized to be private. Hence, we use the state-of-the-art membership inference attacks on CIFAR10 to obtain numbers on attack success and calculate empirical privacy. In these experiments we also outperform Steinke et al. in our empirical privacy measurements. (See Figure 4)\\n\\n> Writing and Presentation Quality:\\n\\nWe enhanced the quality of text and resolved the typos and missing references. We appreciate your feedback.\"}
Notably, their auditing algorithm can be applied to various adversaries beyond the standard membership inference, such as reconstruction attacks.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"**Advancement of f-DP Tools**: The paper contributes to the understanding and practical application of f-DP, which could be of independent interest.\", \"**Interesting Problem**: Auditing DP mechanisms in a one-run scenario is interesting for practical implementations (particularly in the black-box scenario, see the weakness section), and the paper makes significant progress in this area.\", \"**Experimental Validation**: The experimental results are compelling and demonstrate the effectiveness of the proposed approach.\", \"**Versatility in Adversarial Models**: Extending the auditing algorithm to handle different adversaries, such as reconstruction attacks.\"], \"weaknesses\": [\"The authors investigate exciting problems and provide interesting results. I encourage the authors to continue working on these results, as they are sound and exciting to the DP community. However, I don\\u2019t think the work is ready to be published in its current form, as it is somewhat rushed. I sketch my main concerns below.\", \"**Writing and Presentation Quality**: The manuscript contains several errors and unclear explanations. The authors should revise it before publication, as there are plenty of writing errors and bad citing style.\", \"**Unreferenced Figures and Results**: Some results, particularly those in Figure 7, need to be adequately referenced or explained within the text, leading to confusion about their significance.\", \"**Incomplete Explanation of Gaps**: The paper needs to explain the gaps between theoretical and lower bounds. 
Possible reasons for these gaps should be analysed, such as limitations of the f-DP framework, assumptions made in the analysis, or practical considerations in implementation.\", \"**Insufficient Experimental Details**: There are no experiments in the black-box setting for which we are compelled to use one-shot auditing. The white-box setting enjoys a tight and efficient auditing algorithm (Nasr et al., 2023), while the black-box algorithms are rather expensive.\"], \"questions\": [\"Questions:\", \"You claim that your approach achieves tighter results as the number of canaries increases, outperforming the empirical privacy results from Steinke et al. (2023), suggesting that the results can be tight as we increase the number of canaries. Could you elaborate on why your bounds continue to improve with more canaries while the bounds in previous work degrade? What underlying mechanisms in your algorithm contribute to this improvement? Citing the authors: \\u201d Figure 1 demonstrates that our approach outperforms the empirical privacy results from Steinke et al. Interestingly, while the bound in Steinke et al. (2023) degrades as the number of canaries increases, our bounds continue to improve.\\u201d\", \"What potential sources contribute to any lack of tightness in your lower bounds? Are there specific aspects of the f-DP framework or your implementation that introduce looseness? How might these be addressed in future work to enhance the tightness of the bounds?\", \"How does your algorithm perform in the black-box setting compared to the white-box setting? 
Can you provide detailed experimental results illustrating this performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your constructive feedbacks\", \"comment\": \"We thank the reviewer for their careful reading of the paper and constructive feedback.\\n\\n> clarify that by \\\"one run\\\" you mean a training run (rather than an inference run)\\n\\nWe clarified in the abstract and also added a footnote to clarify that whenever we say one run, we mean one training run. (Line 106)\\n\\n> explicitly state the limitation of Steinke et al. (2023) that you are addressing (in Line 80-82)\\n\\nWe explicitly discuss the limitation now. Here\\u2019s our updated paragraph: \\u201cSteinke et al. highlighted a limitation in their approach in auditing specific mechanisms, such as the Gaussian mechanism. They correctly argue that simplifying the mechanism's behavior to just two parameters, $(\\\\epsilon,\\\\delta)$, results in sub-optimal auditing of specific mechanisms. In other words, the effectiveness of membership inference attacks against the Gaussian mechanism differs significantly from predictions based solely on the $(\\\\epsilon,\\\\delta)$ parameters. To overcome this limitation, we propose auditing the entire privacy curve of a mechanism, rather than focusing solely on $(\\\\epsilon,\\\\delta)$.\\u201d\\n\\n> change the references to algorithm 3.1 to algorithm 3 (given that that is what the algorithm is called)\\n\\nWe fixed this problem. Thanks for your careful reading. \\n\\n> remove double mentions of Steinke et al. by just using the reference instead (e.g., in Line 420)\\n\\nWe corrected our references. \\n\\n> Reading flow and typos:\\n\\nWe enhanced the writing and also the reading flow in the paper. We thank the reviewer for their suggestions!\\n\\n> What do you mean by \\\"gubernatorial analysis\\\"? 
(Line 95)\\n\\nThis was a typo (created by autocorrect!). We meant \\u201ccombinatorial analysis\\u201d. This refers to the part of the analysis where we randomly shuffle the order of canaries and use the randomness of this ordering to perform a double counting argument. \\n\\n> Do you have an intuition why the bound in Steinke et al. (2023) degrades with higher numbers of canaries while your bounds continue to improve?\\n\\nWe have a discussion about this in the experiment section (see page 10, line 490). The reason the bound of Steinke et al. degrades as the number of samples increases beyond a certain point is because of the way their bound depends on the $\\\\delta$ term. Specifically, they have an $O(m\\\\delta)$ term in their upper bound that starts to dominate as $m$ increases. For us, this effect does not happen because our analysis does not rely on $\\\\delta$ at a single $\\\\epsilon$ to bound the probability of bad events. Our reliance on the $f$-curve enables us to keep improving with more canaries.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer CU1c,\\n\\nThank you once again for your hard work in reviewing our paper. We would greatly appreciate any further comments you may have. Additionally, we would be grateful if you could let us know whether your initial comments have been adequately addressed, regardless of whether you decide to modify your score. Our goal is to ensure that all your feedback is incorporated into the next iteration of the paper.\"}", "{\"comment\": \"Thank you again for your feedback and for taking the time to review the paper. 
We are glad to hear that your comments have been adequately addressed.\"}", "{\"summary\": \"This paper proposes a computationally efficient privacy auditing procedure by leveraging the f-DP curve, and shows that the resulting lower bounds are tighter than those of previous work.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is well-motivated and, for the most part, clearly written. It provides a notable improvement over prior privacy auditing techniques.\", \"weaknesses\": \"The paper contains some ambiguities and cosmetic errors that should be addressed to improve clarity and overall presentation.\\n1) clarify that by \\\"one run\\\" you mean a training run (rather than an inference run)\\n2) explicitly state the limitation of Steinke et al. (2023) that you are addressing (in Line 80-82)\\n3) change the references to algorithm 3.1 to algorithm 3 (given that that is what the algorithm is called)\\n4) remove double mentions of Steinke et al. by just using the reference instead (e.g., in Line 420)\\n5) fix the reading flow in Definition 6 (second bullet point is not a full sentence)\\n6) correct typos (e.g., Line 194/195, 307, 466, 505/506) and wrong capitalizations in the middle of sentences (e.g., Line 100)\", \"questions\": \"1) What do you mean by \\\"gubernatorial analysis\\\"? (Line 95)\\n2) Do you have an intuition why the bound in Steinke et al. (2023) degrades with higher numbers of canaries while your bounds continue to improve?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A summary of changes in our revision\", \"comment\": [\"We thank all reviewers for their constructive feedback. We have updated the paper to address the reviewers concerns and suggestions. 
To summarize, we have made the following changes:\", \"Added clarification, discussion and insights on why our bounds behave differently from previous work. (Paragraphs starting in lines 84, 186, 216 , 492, 1185)\", \"We ran new experiments in the black-box settings and compared our bounds with previous work. (Figures 2 and 4 in the revised paper.)\", \"We enhanced the presentation of the paper, fixed typos and resolved broken references.\", \"Added a definition for empirical privacy estimation to clarify how we exactly measure empirical privacy. (Definition 7 and revised version of definition 6)\", \"Added code snippet to address the reproducibility concerns raised by reviewers. (Pages 24-27)\", \"Moved the proof and some of the experiments to appendix to open space for the new discussions and experiments.\", \"Added more discussion on previous work on privacy attacks and auditing. (Page 5, lines 255-273)\", \"We hope these changes have addressed the reviewers' concerns and would welcome any additional feedback.\"]}", "{\"comment\": \"I appreciate the effort that went into rewriting the paper. However, I would have appreciated it if the authors had respected the guidelines and appropriately highlighted the changes they made to the manuscript.\\n\\nI would most likely champion acceptance if the current version was the initial final one. However, I am tempted to keep my reject stance, as the rebuttal is meant to discuss and marginally improve the manuscript, not provide a new one. I feel like the authors have abused the rebuttal phase to heavily rewrite the main body of the paper, which now would require a new review. This defeats the purpose, in my opinion, of rebuttals.\\n\\nThat being said, I find the work valuable, but I think it is only fair to resubmit it directly in this new version.\"}", "{\"title\": \"Thank you for your constructive feedback!\", \"comment\": \"We thank the reviewer for their valuable comments and feedback. 
We have completed some experiments per your suggestions and are in the process of running more. We plan to include all these in the next iteration of the paper. Please find our responses to individual questions below.\\n\\n- **Some notions, such as the concept of empirical privacy, could have been more formally defined.**\\n\\n We have updated our draft with more details on how we define the notion of empirical privacy. Definition 7 in the revised version of the paper precisely defines empirical privacy. To calculate empirical privacy, we find the strongest $f$-DP curve that will pass our audit and then calculate \\\\(\\\\epsilon\\\\) and \\\\(\\\\delta\\\\) for that curve.\\n\\n- **Additional experiments on at least two other datasets should be conducted.**\\n\\n Thank you for your suggestion. We performed a new experiment on the Purchase dataset. Purchase is a tabular dataset consisting of 600 numerical columns. We used 25,000 canaries and varied the number of guesses. We achieve an empirical $\\\\epsilon \\\\approx 7$, while Steinke et al.'s approach achieves $\\\\epsilon \\\\approx 4.5$. We are also in the process of running experiments with the AGNews dataset and will include it in the final version. Note that for all experiments, we expect to perform better than Steinke et al. due to our tighter analysis.\\n\\n- **The number of canaries needed for the experiments is very high and is likely to have a significant impact on the utility of the classifier learned.**\\n\\n We agree with the reviewer that the design of canaries is important. However, it is orthogonal to our work. Our auditing procedure can be instantiated with any set of canaries and any attack. Our contribution lies in analyzing empirical privacy better than prior work, for any given instantiation of attack and canary selection. For our black-box experiments on CIFAR-10, for example, we choose samples from the dataset as our canaries. 
These canaries will not degrade the quality of the model because they are from the same distribution.\\n\\n- **Discuss the design of the function \\\\( f \\\\) that should be considered for the auditing.**\\n\\n We have added a new discussion on how to choose this function \\\\( f \\\\). In the revised paper, we state:\\n\\n > The family of trade-off functions should be chosen based on the expectations of the true privacy curve. For example, if one expects the privacy curve of a mechanism to be similar to that of a Gaussian mechanism, then they would choose the set of all trade-off functions imposed by a Gaussian mechanism as the family. For example, many believe that in the hidden state model of privacy (Ye & Shokri, 2022), the final model would behave like a Gaussian mechanism with higher noise than what is expected from the accounting in the white-box model (where we assume we release all the intermediate models). Although we may not be able to prove this hypothesis, we can use our framework to calculate the empirical privacy while assuming that the behavior of the final model would be similar to that of a Gaussian mechanism.\\n\\n- **Can the approach be used on classifiers for other types of data, such as tabular data?**\\n\\n Yes, as mentioned above, we have new experiments on the Purchase dataset. In these experiments, our set of canaries is selected from the data distribution, and we use state-of-the-art membership inference attacks ([https://github.com/privacytrustlab/ml_privacy_meter](https://github.com/privacytrustlab/ml_privacy_meter)) to obtain empirical privacy. Our empirical privacy numbers are better than Steinke et al.'s estimates.\\n\\n- **Apart from the classical example of differential privacy, can you provide a few other examples of function \\\\( f \\\\) that could be audited using your framework?**\\n\\n We can audit any \\\\( f \\\\)-DP curve. In this work, we focused on Gaussian and sub-sampled mechanisms. 
However, we can also consider mechanisms such as Laplace, Exponential, and randomized response mechanisms. As long as we have the \\\\( f \\\\)-DP curve for the mechanism, we can plug it into our Algorithm 3 and obtain empirical privacy. We are in the process of running some experiments with the Laplace mechanism to demonstrate this and will include it in the next iteration of the paper.\\n\\n- **There are some minor typos.**\\n\\n Thank you for pointing out these typos. We have fixed them all in our revised version.\"}", "{\"comment\": \"Dear Reviewer QcyN,\\n\\nThank you once again for your valuable and constructive comments. We would be grateful if you could let us know whether your concerns and questions have been adequately addressed. We found your feedback really valuable and want to make sure to incorporate them in the next iteration of the paper.\"}", "{\"title\": \"Thank you for your constructive comments.\", \"comment\": \"We thank the reviewer for their careful read of the paper and constructive comments. They helped us improve the quality of the paper.\\n\\n> The main weakness of this paper is its presentation.\\n\\nWe have significantly enhanced the presentation of the paper. These improvements are both in writing quality and also some new discussions and definitions to enhance the readability. We appreciate your feedback.\\n\\n> The authors have not provided a code artifact. While the contributions of this work are mostly theoretical, the implementation of the algorithm requires care and it would help reproducibility if a code artifact were supplied.\\n\\n\\nWe agree with the reviewer that implementation of the algorithms requires care. We added code snippets for all of the key algorithms for our auditing procedure. These can be found in appendix (pages 23-28). \\n\\n> On the section \\u201cEmpirical Privacy\\u201d line no 307 (line 383 in the revision), why do the trade off curves need to be ordered? 
If you have a set of trade off curves that pass, couldn't you build a new trade off curve $f(x)=\\\\min_i f_i(x)$.\\n\\nYou are right, they don't have to be ordered. We can find the maximal set of all privacy curves that pass and calculate the empirical privacy based on that. The only problem with your formulation is that we need to take the max, and not the min (this is because smaller $f$ translates to a weaker privacy guarantee). We formulate this in a new definition (see Definition 7 for empirical privacy and other corresponding definitions) and clarify that we don't need to have an ordered set. \\n\\n> In what sense are the empirical results tight in Fig 7 (Fig 11 in the revision) and why is that not also evident in Fig 1?\\n\\nWe rephrased our claim. We now state that our estimation of delta and epsilon captures the true behavior of epsilon and delta whereas for Steinke et al. it does not. The reason Fig 7 (Figure 11 in revised draft) looks much tighter than Fig 1 is because we were using a different set of confidence intervals to calculate the bounds in figure 7 (Figure 11 in revised draft). We realized that this particular plot was created with 1-confidence instead of confidence. We regenerated the plot and updated the paper with the new plot (see Figures 10 and 11). The curves still reflect the true behavior of $(\\\\epsilon, \\\\delta)$ but with a larger gap from the ground truth.\\n\\n Thank you for your careful review of the paper! Also note that we have moved this experiment to the appendix to open space for other discussions and experiments. We will be happy to bring them back if you find them necessary. \\n\\n> Can you explain why abstentions are important in this algorithm?\\n\\nConsider a random variable representing the ratio of correct guesses ($c$) to total guesses ($c'$). Note that the core component of our auditing procedure is a way of calculating tail bounds on this random variable. 
If we reduce the number of guesses, the variance of this ratio tends to decrease because the ratio approaches 1 (the adversary can make more correct guesses when we decrease $c'$). Conversely, if we increase the number of guesses, the variance can also decrease because having more guesses generally leads to a more stable average, owing to the law of large numbers. This balance makes the number of guesses a crucial factor to optimize for. We have added a discussion about this right after our ablation on the number of guesses. (Note that this experiment and the ablation are now in the appendix, page 22, line 1185.)\"}", "{\"comment\": \"Thank you for the clarifications. While I appreciate the significant improvements that went into the revised version, I also agree with reviewer CU1c that it is not intended to submit an entirely new version in the rebuttal phase. Nevertheless, I believe that the paper is of value to the community and I have therefore increased my score. I would encourage the authors to submit a more final version next time as this will also lead to higher quality reviews.\"}", "{\"comment\": \"We are glad that your evaluation of the revised draft is positive. Regarding the changes in the paper, we had originally provided a list of changes as a general comment to all reviewers. All these changes were made to address reviewers' requests. We have also included a supplementary PDF document that highlights the differences between the original and revised versions of the paper. In this document, text highlighted in blue indicates new additions, red shows removed text, and green marks text that has been moved (we recommend viewing this document in a two-page view for the best experience).\\n\\nWe want to emphasize that our intention was not to misuse the rebuttal phase. As demonstrated in the diff file, all changes were made with the sole purpose of addressing the reviewers' concerns to the best of our ability. 
We found the feedback extremely constructive and have worked diligently to incorporate it into our paper. \\n\\nWe would also appreciate it if the reviewer could comment on the changes we highlighted in our rebuttal. Has our rebuttal addressed your concerns and answered your questions about the comparison with previous work and also the experiments in the black-box setting? These points were important to clarify and we want to make sure they are sufficiently addressed.\"}", "{\"summary\": \"This paper proposes an approach for auditing the guarantees of a differentially-private algorithm, which in contrast to other existing auditing schemes, does not require re-training of the model. In addition, the approach provides tighter bounds than the related work by Steinke et al.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-The existing literature on privacy auditing is clearly reviewed as well as the main limitations of existing approaches.\\n\\n-The paper is well-written and easy to read. The authors have also provided a clear introduction of the notions necessary to the understanding of their work such as the f-DP curve.\\n\\n-The proposed approach has the benefit of only requiring a single run of the mechanism. The relationship with the method of Steinke et al. is also clearly explained. One of the main novelties of the approach is also to connect the privacy auditing procedure with previous works bounding the accuracy of reconstruction attacks.\\n\\n-The experiments conducted demonstrate that the proposed approach performed better in terms of the tightness of the bound estimated when the added noise is in a low to high regime.\", \"weaknesses\": \"-Some notions such as the concept of empirical privacy could have been more formally defined. 
Other important details are missing such as the relationship between the way canaries are designed and the quality of their possible reconstruction as well as the design of the function f that should be considered for the auditing.\\n\\n-The number of canaries needed for the experiments is very high and is likely to have a significant impact on the utility of the classifier learnt. While an experiment has been conducted with CIFAR-10 to measure the impact of the introduction of 5000 canaries, more experiments should be conducted by varying the number of canaries to observe in a more fine-grained manner the impact of the introduction of canaries.\\n\\n-The approach has been validated empirically only on one dataset. Additional experiments on at least two other datasets should be conducted to validate how well the approach generalizes to other settings. \\n\\n-There are some minor typos that could be corrected in a future revised version. For instance, \\\"(e.g., [1]).\\\" seems to refer to a different bibliography style. Other typos: \\\"an attack algorthm A\\\" should be \\\"an attack algorithm A\\\", \\\"in Essene\\\" should be \\\"in essence\\\" and \\\"augumented multiplicity\\\" should be \\\"augmented multiplicity\\\"\", \"questions\": \"-Can you discuss more how the design of the set of canaries impacts the reconstruction games?\\n\\n-Apart from the classical example of differential privacy, can you provide a few other examples of function f that could be audited using your framework?\\n\\n-Can the approach be used on classifiers for other types of data such as for example tabular data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
0QJPszYxpo
Extended Flow Matching : a Method of Conditional Generation with Generalized Continuity Equation
[ "Noboru Isobe", "Masanori Koyama", "Jinzhe Zhang", "Kenji Fukumizu", "Kohei Hayashi" ]
Conditional generative modeling (CGM), which approximates the conditional probability distribution of data given a condition, holds significant promise for generating new data across diverse representations. While CGM is crucial for generating images, video, and text, its application to scientific computing, such as molecular generation and physical simulations, is also highly anticipated. A key challenge in applying CGM to scientific fields is the sparseness of available data conditions, which requires extrapolation beyond observed conditions. This paper proposes the Extended Flow Matching (EFM) framework to address this challenge. EFM achieves smooth transitions in distributions when departing from observed conditions, avoiding the unfavorable changes seen in existing flow matching (FM) methods. By introducing a flow with respect to the conditional axis, EFM ensures that the conditional distribution changes gradually with the condition. Specifically, we apply an extended Monge--Kantorovich theory to conditional generative models, creating a framework for learning matrix fields in a generalized continuity equation instead of vector fields. Furthermore, by combining the concept of Dirichlet energy on Wasserstein spaces with Multi-Marginal Optimal Transport (MMOT), we derive an algorithm called MMOT-EFM. This algorithm controls the rate of change of the generated conditional distribution. Our proposed method outperforms existing methods in molecular generation tasks where conditions are sparsely observed.
[ "Flow Matching", "Generative Model" ]
Reject
https://openreview.net/pdf?id=0QJPszYxpo
https://openreview.net/forum?id=0QJPszYxpo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xukQrFNl6s", "x5NvhckCAc", "x4a6cri9Ij", "soxoYobhPg", "sZl8cbxdJO", "rQbGmfNQZ0", "oUl98QuoI2", "lJiD76YWAY", "lEm9kEXAlx", "fpdiMuqF43", "dyMk4bW8yr", "dU5SRNzCLo", "dTqhZko2mg", "cLEf3w5i2T", "bDvOThbVvf", "aEGfXvaiTW", "ZcyjnfxiJX", "ZCQ6k9Smns", "ZBxTLNWZ8a", "YE5R11N0ey", "VmQh9BZaxE", "TwyUAc5FzB", "RrA2RuNZ3N", "Nd42BmDWnv", "LjRsvL4yKN", "FoAIiVhgCe", "FgsDjTHIlV", "Edd3be7vQ7", "Dzrno0E3o3", "5SwUSf235m", "51yXp4Kee6", "4POgVFAP9a", "4GGtcObbXK", "27SwNcXWxC", "0DznB3Fsif" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733142732552, 1733218753178, 1732716088042, 1732246574155, 1730647524817, 1733159007312, 1732715382258, 1732164657273, 1732184715787, 1732164294315, 1732857503840, 1732714566254, 1732897461577, 1732974399067, 1732164123568, 1730699526899, 1733217671425, 1729370314185, 1732164861407, 1732164196800, 1732974000805, 1733019218654, 1732974855180, 1733217712765, 1732164430702, 1732164530160, 1732721071785, 1732565764801, 1732164394038, 1734605175319, 1737523786524, 1730658677204, 1732973729090, 1730475028270, 1733175461431 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_1CpW" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_QpJX" ], 
[ "ICLR.cc/2025/Conference/Submission6719/Reviewer_1CpW" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_QpJX" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_a59p" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_QpJX" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_a59p" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_QpJX" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_a59p" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_1CpW" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Area_Chair_piLa" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_mdMu" ], [ "ICLR.cc/2025/Conference/Submission6719/Authors" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_Cdwg" ], [ "ICLR.cc/2025/Conference/Submission6719/Reviewer_mdMu" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your insightful questions and for engaging in this discussion. 
Here are our responses:\\n\\n> Can the authors clarify where Theorem 5.10 in Villani, 2009 discusses the convergence of finite-sample approximations?\\n\\nWe apologize for the confusion. The correct reference is Theorem 5.20 (Stability of optimal transport). In this theorem, please consider the sequence $(\\\\mu_k)_k$ as the sequence of finite-sample approximations.\\n\\n> Since this conjecture and the finite sample convergence in the coupling of two distributions are an important part of the paper, can the authors provide citations which discuss convergence properties or proofs?\\n> Can the authors provide runtime on solving the finite-sample optimal transport for data of size ~ 12888 with a batch size of 128? More generally, for a single batch what is the run time for all the steps in section A as identified by the authors?\\n\\nFirstly, we do not consider the finite-sample convergence properties to be of paramount importance in this paper. Approximating the optimal transport coupling using finite samples is a well-established method in generative models and is not novel. For example, please refer to:\\n- Tong et al., TMLR, 2024\\n\\nAdditionally, the convergence of the coupling itself has been studied less than the convergence of the minimal transport cost due to the non-uniqueness of the coupling.\\n\\n> To the best of my understanding, $W_1$ is also an integral probability metric that requires solving a high-dimensional optimization over Lipschitz functions with Lipschitz constant less than 1.\\n\\nWhen computing the $W_1$ distance between two probability distributions $\\\\mu_{0}$ and $\\\\mu_1$ numerically, you can use the ``ot.emd`` function from the Python Optimal Transport library. This function does not use the formulation of the integral probability metric; see the documentation.\\n\\n> Optimizing the interpolant in step 3 of [Albergo et al. 2024] is significantly cheaper than solving A for EFM. Moreover, [Albergo et al. 
2024] are able to scale their experiments up to 64 x 64 x 3 dimensions, significantly higher than the.\\n\\nWhile the optimization in Albergo et al. (2024) might be relatively cheaper, their method is only applicable when the conditions are discrete. Our method can be used when the conditions are continuous, meaning that there is a natural (Euclidean) distance between the conditions.\\n\\n> Also, can the authors explain what they mean by an impartial metric?\\n\\nIn the previous response, we used the word \\u201cimpartial\\u201d with respect to the specific dataset in question when using the \\u201ccategorical label.\\u201d We apologize for any confusion caused by our use of the word *metric* to refer to the metric or distance in the conditions, not the *evaluation* metric.\\n\\nOur task of generating a conditional distribution of unobserved $c$ in continuous space of conditions $\\\\Omega$ is mainly dependent on the *metric* on $\\\\Omega$ because it determines how close a given $c$ is to another. For example, if the target condition $c_*$ is very close to $c_0$ in one metric (say $d_1$), it is hoped that $\\\\mu_{c_*}$ is distributionally close to $\\\\mu_{c_0}$. This may not be the case for another metric (say $d_2$), where $d_2(c_*, c_0) \\\\gg d_1(c_*, c_0)$, leading to a vastly different $\\\\mu_{c_*}$ from $\\\\mu_{c_0}$. Our method aims to construct a model $\\\\mu\\\\colon \\\\Omega \\\\to \\\\mathcal{P}(D)$ such that $\\\\mu_c$ is as smooth in $c$ as possible.\\n\\nIn the MNIST example with categorical conditions, we believe there is no *impartial* metric on the conditions of *digits* when considering a conditional distribution of *images*, especially because there are no ground-truth datasets other than those used in training ($\\\\{0, \\u2026, 9\\\\}$). 
This is not the case for the molecule dataset, where an appropriate continuous label with a ground-truth dataset is present but not in the training dataset.\\n\\n> Thank you for adding the MNIST digits on CIFAR10 background experiment, can the authors provide details on the baselines as well as the metrics used.\\n\\nFor the baseline method, we used the method of (Zheng et al. [1]), where the same model as ours was used to train the velocity field $\\\\mathbb{R}^{d_x}\\\\times\\\\mathbb{R}\\\\times\\\\mathbb{R}^{d_c} \\\\ni(x, t, c) \\\\mapsto v(x,t,c)\\\\in\\\\mathbb{R}^{d_x}$, except that instead of outputting a matrix of size $(d_c + 1) \\\\times d_x$, we used a trainable linear map of size $(d_c + 1) \\\\times 1$ to convert the matrix to a vector of shape $1 \\\\times d_x$. We trained the model for the same number of iterations as ours. For evaluation, we used $W_1$ distance between the generated distribution $\\\\hat \\\\mu_c$ and the ground truth $\\\\mu_c$, approximated by applying ``ot.emd`` on batches of size 10000 sampled from $\\\\hat \\\\mu_c$ and $\\\\mu_c$.\\n\\nWe hope these responses address your concerns. We would greatly appreciate it if you could consider improving our rating based on this clarification. Thank you for your valuable feedback.\"}", "{\"comment\": \"I thank the authors for their response and additional experiments. This partially satisfies my main concern of empirical scalability and practical applicability as such I raise my score 5-->6.\", \"why_not_higher\": [\"While I like the idea of enforcing smoothness of predicted distribution relative to the condition space, I'm not sure the killer application has been identified.\", \"The experiments here are still limited in scale. 
Combined with the reasonable, but somewhat limited novelty on the theory side given prior work on conditional flow matching, I think less toy experiments would significantly improve the impact of this work.\"], \"why_not_lower\": [\"I believe the authors have demonstrated an interesting method towards generalizing conditional flows towards unseen (but related) conditions.\", \"I'm not concerned with mini-batch approximation errors and believe this leads to a relatively scalable algorithm.\", \"While shown in some somewhat niche experimental settings, I believe this work is valuable in its current state to those applying flow matching to scientific applications.\"]}", "{\"title\": \"Thank you very much for the followups and discussion! (Part2)\", \"comment\": \"However, we included in the section 7.2 of the revision the conditional generation of the MNIST dataset with a single image-net background, where the two conditions are \\u201ccolor\\u201d and \\u201crotation\\u201d that constitute the 4-dimensional condition-space $\\\\Omega=[0, 1]^4$(first dimension is the rotation, and the rests are RGB) . For the training, we used a conditional dataset corresponding to 12 uniform random samples from $\\\\Omega$, and we evaluated the Wasserstein distance from the ground truth (GT) conditional distributions in a manner similar to Figure 4. Note that these conditionings are nonlinear in the presence of background. We compared our method against the classifier-free guidance method (Zheng et al. [1]) with different guidance strengths.\\n\\n\\n> Can the authors give a precise, not a rough, set of differences to the multi-marginal modeling approach introduced in [Albergo et al. 2023] and the added advantage of their approach?\\n\\nOur method differs from [Albergo et al. 2023] in that we do not necessarily require the path optimization step (Corollary 3) in [Albergo et al. 2023]. \\nTo further clarify the difference, let us first provide our understanding of [Albergo et al. 
2023] in a procedural format.\\n## [Albergo et al. 2023]\\u2018s approach\\nHere is the order of steps by which Albergo et al. construct a transportation plan:\\n1. Define the stochastic process (Barycentric Stochastic interpolant, eqn 5) $$x(\\alpha) = \\sum_i x_i \\alpha_i, ~~~~~ (x_0, \\u2026. x_n) \\sim \\pi$$\\nwhere $\\pi$ is produced via barycentric interpolation of a set of optimal transports from $\\mu_0 \\to \\mu_k$ (eqn 18), and $\\mu_0$ is chosen as an uninformative distribution (e.g., Gaussian).\\nThis will define a map $\\alpha \\to \\mu_\\alpha$, where $x(\\alpha) \\sim \\mu_\\alpha$.\\n\\n2. Learn the vector fields $g_k$ in the system of continuity equations for $(\\lbrace g_k \\rbrace , \\mu_\\alpha)$ (eqn 7)\\nby leveraging Theorem 1.\\n\\n3. Given endpoint distributions $\\rho_i, \\rho_j$, optimize the path $\\alpha : I \\to \\Omega$ with $\\mu_{\\alpha(0)} = \\rho_i, \\mu_{\\alpha(1)} = \\rho_j$\\nfor the energy $ \\int_0^1 \\mathbb{E}[| [g_1, \\u2026. g_n] \\dot \\alpha(\\alpha(t), t) |^2] dt $, obtaining the optimal $\\alpha^*$.\\n\\n4. Generate a path from $\\rho_i$ to $\\rho_j$ via the ODE $\\dot{x}(t) = [v_1, \\u2026. v_d] \\dot \\alpha^*(t) $.\\n\\n## EFM\\nMeanwhile, this is the way we solve the interpolation problem:\\n\\nA. \\n1. Obtain the coupling $\\pi$ of your choice over $ (x_1, \\u2026. x_n) $. In our paper, we present (i) MMOT and (ii) Generalized Geodesic; (ii) is\\nthe same as the coupling used in (1) above.\\n\\n2. Use (eqn 3.4) to construct a stochastic process $$\\psi(\\alpha) = \\phi(\\alpha | x_1, \\u2026. x_n), \\text{ with } (x_1, \\u2026. x_n) \\sim \\pi $$\\nThis results in a map $\\alpha \\to \\mu_\\alpha$, where $\\psi(\\alpha) \\sim \\mu_\\alpha$.\\n\\nB. \\n\\nLearn the matrix field $u$ in the generalized continuity equation for $(u, \\mu_\\alpha) $\\n\\nC.
\\n\\nGiven endpoint distributions $\\rho_{c_0}, \\rho_{c_1}$, generate a path from $\\rho_{c_0}$ to $\\rho_{c_1}$ via the ODE\\n $\\dot{x}(t) = u \\dot{\\gamma}(t) $, where $\\dot{\\gamma}(t) = c_1 - c_0$.\\n\\n## The difference between [Albergo et al. 2023] and EFM \\nEFM differs from [Albergo et al. 2023] in that our procedure does not require the equivalent of (3) in [Albergo et al. 2023]. In fact, the procedure [(1) (2) (3) (4)] of [Albergo et al. 2023] and the procedure [A, B, C] of EFM can be precisely aligned, and our absence of the requirement of (3) is the exact difference between them. \\n\\nTo be more precise, note that our A1 and A2 correspond to (1), B corresponds to (2), and C corresponds to (4). Also, $x(\\alpha)$ corresponds to our $\\psi(\\alpha)$, $[g_1, \\u2026. g_n]$ corresponds to our $u$, and $\\alpha$ corresponds to $\\gamma$. \\nWe do not necessarily require (3) in our procedure because, instead of optimizing the path $\\alpha$ on the condition space that minimizes the pairwise kinetic energy through the weights $\\lbrace g_k \\rbrace$, we choose a process $\\psi$ on the observation space that (approximately) minimizes the multimarginal analogue of the kinetic energy, that is, the Dirichlet energy of $u$.\\n## Advantages\\n- Our approach does not require the optimization of $\\alpha = \\gamma$ for every interpolation.\\n- Our approach can be modified to incorporate [Albergo et al. 2023]\\u2019s approach by including the analogue of their (3) before (C). \\nMore specifically, if we choose to execute [A1, A2, B, (3), C] in order, the target velocity field will be theoretically the same as Albergo\\u2019s approach when (i) we choose Generalized Geodesic in A1 and (ii) choose linear regression in A2.\\n\\nThis way, our method is complementary to [Albergo et al. 2023].
Our method offers \\u201can additional\\u201d venue that uses the optimization of the \\u201cstochastic process\\u201d itself.\"}", "{\"comment\": \"Thank you for your response to the review. I have a few more questions below:\\n\\n> Larger batch sizes tend to stabilize the learning process because the changes in the matching\\u00a0\\u03c0\\u00a0per iteration become smaller.\\n> \\n\\nCan the authors comment on the convergence of the finite-sample approximation of the coupling $\\\\pi$ to the coupling $\\\\pi^*$ which minimizes $E_{\\\\pi}\\\\|x_0 - x_1\\\\|_2^2$? Any empirical or theoretical analysis would make the case stronger. More specifically, as the authors claim on line 276, can the optimal coupling $\\\\pi^*$ be approximated by finite-samples? a proof or asymptotic analysis would be appreciated. \\n\\nFrom appendix D.3 it seems the cost of MMOT is prohibitive, therefore the authors propose an approximation. Can the authors discuss the effect of this approximation on the claim that they minimize an upper bound on the Dirichlet energy?\\n\\nI believe the authors have a typo in their rebuttal, there is no discussion of any Geo-EFM algorithm in appendix F, rather it contains a description of metrics, datasets and baselines. \\n\\n> Our MMOT-EFM is novel in that it minimizes the transport cost in a complementary way to the optimization of $\\\\gamma$\\n> \\n\\nCan the authors why a more expensive procedure is a better option? Can the authors provide a graph containing the computational burden of computing the plan $\\\\pi$ as the batch size increases for a high-dimensional (d > 100) dataset.\\n\\n> We apologize for the technical nature of the mathematical equations. We would like to consider your comments on the manuscript. Could you tell us exactly where we introduce the new notation before the definition?\\n> \\n1. the paragraph from lines 148-161, including equation 2.3. 
\\n\\n> Although there are infinite ways of extrapolation, it is reasonable to assume an inductive bias that the sensitivity of data in nature (e.g., molecules) to conditions (e.g., chemical properties) is not unnaturally large\\n> \\n\\nCan the authors show any examples of such an inductive bias helping in solving any high-dimensional inverse problems? For instance, with the MNIST/CIFAR10, etc datasets? Or class-conditional generation?\\n\\nCan the authors give a precise, not a rough, set of differences to the multi-marginal modeling approach introduced in [Albergo et al 2023] and the added advantage of their approach?\"}", "{\"summary\": \"This paper proposes an extension to flow matching to conditional generation on unknown (but related) conditions using a flow on both the data space and the condition space. A variant of this based on multi-marginal optimal transport is proposed as an extension to optimal transport conditional flow matching. 2D and conditional molecular generation experiments are performed showing conditional generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Understanding how to extend current generative models to more general conditionals (especially unobserved conditionals) is an important problem particularly in the sciences.\", \"I enjoyed the symmetry of the presentation of first standard flow matching and OT-CFM settings followed by EFM and MMOT-EFM settings. Table 1 is great to understand the difference to OT-CFM.\", \"To the best of my knowledge the theory is correct and answers some of my questions on how one might generalize flow matching to condition-augmented spaces.\"], \"weaknesses\": [\"It would be great to make clearer to the reader how this method extends to unseen conditions. I think lines 402-405 kind of get at this, but I would have loved to see more emphasis on this point. 
It is very easy to design a conditional generative model that technically extends to unseen conditions, but it is much more difficult to enforce that that model extends in a reasonable way. EFM has the potential to guide that extension and I would love to see that point explored further.\", \"The algorithm is not yet useful in real applications. While the authors also acknowledge this, it\\u2019s still a large limitation of the impact of this work. The molecule experiment is extremely limited in terms of comparisons to existing work and overall training setup.\", \"Much of the theoretical statements are direct extensions from prior work.\"], \"questions\": \"When is MMOT-EFM and EFM in general expected to work better than COT-FM / Bayesian-FM? I know there is a short explanation on the differences in assumptions but it is difficult for me to translate what is gained when making a piecewise continuous assumption on p(x|c) vs. a measurability assumption. It\\u2019s not clear to me how this compares to these prior works in general.\\n\\nSmall comments that don\\u2019t affect the score: \\nThere appears to be an unfinished section D.5 in the appendix. \\nGG-EFM isn\\u2019t defined in the main text. \\nI didn\\u2019t understand the distinction between p_c and p_{0,c} line 170. \\nTypo on line 311 \\u201cton he\\u201d\\nShr\\\\\\u201dodinger to Schr\\\\\\u201ddinger line 425\\nThe source points in Figure 4 b and c (and corresponding appendix figs) are essentially invisible (grey against a grey background). It would be **really nice** to fix this. \\n\\n\\n### Overall\\nI think this work presents an interesting idea with promise to understand how these models generalize to unseen conditions. However, this is not explored theoretically. In addition the current method does not scale to practical settings at the moment. I think further investigation as to when the assumptions behind this method make sense relative to other methods would greatly strengthen this work. 
A better understanding of how this relates to prior literature and when this method is preferable would likely change my opinion of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> We apologize for the confusion. The correct reference is Theorem 5.20 (Stability of optimal transport).\\n\\nThank you for citing the correct reference. The reference mentions that the finite-sample approximations lead to consistent estimators of the optimal transport plan, however there are no mentions of producing unbiased estimates. Would that apply that using the finite-sample approximation, rather than the intractable transport plan, in the equations on lines 265 not yield an upper bound? So in effect, the authors are not able to upper bound the Dirichlet energy with their objective?\\n\\n> While the optimization in Albergo et al. (2024) might be relatively cheaper, their method is only applicable when the conditions are discrete.\\n\\nTheir method is applicable to marginals distributions with continuous-valued support, see definition 1. \\n\\n\\n> In the MNIST example with categorical conditions, we believe there is no impartial metric on the conditions of digits when considering a conditional distribution of images, especially because there are no ground-truth datasets other than those used in training $(0, \\\\dots, 9)$. \\n\\nThis is incorrect. In inverse problems, one can define a matrix $A$ and observations $y = A x + \\\\varepsilon$, where $x$ is the image and $\\\\varepsilon$ is mean zero noise, see Chung et al 2022 for several examples of $A$. Here, generating $p(x | y)$ is a problem that has been studied for a few decades now, and recently with diffusion models, and $y$ is considered a \\\"ground truth label\\\" since you are generating it using a linear forward process. 
\\n\\nMore typically, one can always consider generating labels $y = g(x)$ for a deterministic function $g$ and then learn the distribution $p(x | y)$ as the authors themselves do in their experiments, for instance rotation, generating a particular color, etc. \\n\\n> We trained the model for the same number of iterations as ours\\n\\nDid the authors use a pre-trained autoencoder? What model class, were any pre-trained models used, what batch size, the cost of running mmot, how many iterations, etc would be useful to understand the significance and difficulties of the proposed methods. \\n\\nCan the authors comment on why they have not done even a MNIST generation experiment. Not being able to scale to ~784 dimensions is not necessarily bad, however it would be useful if the authors can comment on what limitations the method faces when scaling to dimensions > 100.\", \"my_reasons_for_not_increasing_my_score_are\": \"1. presentation quality can be improved substantially, both for the methods and the experiments\\n2. the authors do not discuss the limitations of their work, particularly scaling when using large batch sizes and dimensions. \\n3. the above limitation can be why the authors use low-dimensional problems, limiting to 32 dimensions at most. \\n4. the methods section uses the optimal transport plan, at no point is there any discussion of the implications of using finite-sample and mini-batch optimal transport plans on the velocity field they learn. \\n\\nEven assuming that they have enough model capacity to learn a vector (or matrix) field, what are the implications of using finite-sample approximations of the transport plan? \\n\\n[Chung et al 2022] Diffusion Posterior Sampling for General Noisy Inverse Problems\"}", "{\"title\": \"Thank you very much for the followups and discussion! 
(Part1)\", \"comment\": \"Thank you very much for the follow-up, we would like to address your concerns below.\\n\\n>* Can the authors comment on the convergence of the finite-sample approximation of the coupling $\\\\pi$ to the coupling $\\\\pi^\\\\ast$ which minimizes $\\\\mathbb{E}_\\\\pi |x_0\\u2212x_1|_2^2$? Any empirical or theoretical analysis would make the case stronger. More specifically, as the authors claim on line 276, can the optimal coupling $\\\\pi^\\\\ast$ be approximated by finite-samples? a proof or asymptotic analysis would be appreciated.\\n\\nIt is known that the finite sample approximation converges asymptotically for any $\\\\pi$ that minimizes $\\\\mathbb{E}_\\\\pi |x_0\\u2212x_1|_2^2$, for example [[Theorem 5.10, Villani, 2009]](https://rdcu.be/d09GX). We also conjecture that the multi-marginal case in line 276 converges in a similar way.\\n\\n>* From appendix D.3 it seems the cost of MMOT is prohibitive, therefore the authors propose an approximation. Can the authors discuss the effect of this approximation on the claim that they minimize an upper bound on the Dirichlet energy?\\n\\nIn this approximation, MMOT is applied only at the cluster level, and the coupling between each cluster is conducted using a method of the user\\u2019s choice. Indeed, this has an effect of achieving the energy value that is greater than the upper bound shown in the objective function in 3.2 because, in the implementation, the infimum in $ \\\\inf_\\\\pi \\\\int_{D^{|A|} \\\\times \\\\Xi } \\\\| \\\\nabla_\\\\xi \\\\phi(\\\\xi | x_A) \\\\|^2 \\\\pi (d x_A) dc $ will be taken not over the space of all joint distributions $\\\\mathcal{P} ( D^{|A|}) $, but over its subset of form $\\\\pi_{\\\\mathrm{user}} ( x_A | \\\\lbrace m_{i} \\\\rbrace ) \\\\pi_{\\\\mathrm{cluster}}( \\\\lbrace m_{i} \\\\rbrace ) $ where $\\\\pi_{\\\\mathrm{cluster}}$ is computed with discrete MMOT over $\\\\lbrace m_{ik} \\\\rbrace$ and $ \\\\pi_{user}$ is chosen by the user. 
\\n\\nIndeed, $\\\\pi_{user}$ can be chosen to be couplings that respect the consideration to kinetic energy as well, such as the generalized Geodesics coupling based on optimal transport. \\nAlso, by definition, this approximation shall converge to the actual upper bound if we choose $|U_{ik}| = 1$ take the limit of $|B_i| \\\\to \\\\\\\\infty.$ \\n\\n> I believe the authors have a typo in their rebuttal, there is no discussion of any Geo-EFM algorithm in appendix F, rather it contains a description of metrics, datasets and baselines.\\n\\nWe are sorry for the typo. Our discussion of Geo-EFM in the currently uploaded version (modified: 21 Nov 2024) is provided in Section E, with the title *A REMARK ON GENERALIZED GEODESIC COUPLING(GGC) AND THE SAMPLING OF $\\\\bar{\\\\psi}$.* \\n\\n\\n> (Undefined notations at) the paragraph from lines 148-161, including equation 2.3.\\n\\nThank you for specifying the spot. We stated the definitions before making the statement in 148-161, just as below:\\n\\n>Let $\\\\psi$ be a random path such that $\\\\psi\\\\colon I \\\\rightarrow D$ is differentiable. Let $Q$ be a distribution over a space $H(I; D) \\\\coloneqq \\\\Set{\\\\psi\\\\colon I \\\\rightarrow D | \\\\psi \\\\text{ is differentiable}}$ of paths that map time $t\\\\in I$ to data $x \\\\in D$, and use $\\\\mu^\\\\psi_t$ to denote $\\\\delta_{\\\\psi(t)}$.\\n>With these definitions, we can present $\\\\mu = \\\\mu^Q$ from a random path $\\\\psi$ as\\n\\nWe have revised the phrases with similar expressions when we can. \\n\\n> Can the authors show any examples of such an inductive bias helping in solving any high-dimensional inverse problems? For instance, with the MNIST/CIFAR10, etc datasets? Or class-conditional generation?\\n\\nThe difficulty in providing the efficacy of our method on image benchmarks like MNIST/CIFAR10 is that the conditions in these datasets are (1) discrete and (2) there is no impartial metric on the space of conditions. 
\\nFor example, MNISTS are labeled with digits from 0 to 9, but in terms of image generation, 0 is not closer to 1 than it is to 9. One hot vector embedding is also not too reflective of actual image generation because 9 is indeed much closer to 4 in terms of an image than it is to 3. This is the reason why we restricted our experiments to the dataset like ZINC-250k of the form $(x, c(x))$, where $c(x)$ is a feature of $x$. While there are many datasets of this form in application, for example, in applied fields of science such as economics, biochemistry, and physics, benchmark datasets, models, and publicized architecture are difficult to obtain. (continued to next part)\"}", "{\"title\": \"Rebuttal by authors (Part 1)\", \"comment\": \"We appreciate your thorough review of our manuscript and your understanding of the motivation behind our research. Below, we address your concerns in detail. We would be grateful if you could reconsider our rating based on these clarifications.\\n\\n# For Weakness\\n\\n> * However, the glaring weakness is that there is not clear cut numerical use case shown. I would like to see a not toyish example where we actually need several conditions and the transport between them. Usually, in the classical inverse problems works there is an implicit geodesic path taken where $y_t=ty+(1\\u2212t)y$, since one does not need to alter the condition if posterior sampling is the ultimate goal. If one wants to do style transfer (which seems to be the second motivation of this paper), then one can simply use a conditional FM network which receives the two conditions (source and target) as inputs. Therefore, while theoretically neat I am not convinced of why the generalized continuity equation and a network which moves efficiently also in the condition space, is advantageous. 
The authors can convince me by providing a clear example where either i) the classical conditional algorithms are not applicable or ii) this approach significantly outperforms the other flow matching models.\\n\\nIndeed, the primary motivation of our paper is to address conditional generation in scenarios where the conditions are sparsely observed, a situation where \\\"the classical conditional algorithms are not applicable.\\\" For example, in the case of style transfer between two known conditions $c_1$ and $c_2$, as you mentioned, one can simply use a conditional FM network that receives the two conditions (source and target) as inputs. However, if either $c_1$ or $c_2$ is unavailable, this FM network cannot be used. Furthermore, even if all conditions $c_1, c_2, \\\\dots, c_M$ are known, and we wish to perform style transfer among them, we would need to train the FM network approximately $\\\\binom{M}{2}$ times, which becomes inefficient as the number of conditions $M$ increases. One of the advantages of our EFM approach is that it can handle the aforementioned intractable situations in FM with a single training of the matrix field model.\\n\\n> * The scaling in Nc and condition dimension seems to be bad. can you provide the run times for the molecular example also for the baselines? it only says in the appendix that they were completed within 4 hours, but I expect the baselines to train much quicker. Also latent space of a VAE is pretty low dimensional. Please provide training your conditional flow matching model on MNIST (no VAEs..), where the condition space is not discrete (i.e., for instance inpainting). Even if this does not fit your motivation, I would like to see the results in such a more standard example and this would improve my confidence in the scalability.\\n\\nAs you pointed out, the runtime for MMOT-EFM can become significantly longer compared to the baselines when $N_c$ and the condition dimension are large. 
However, the computation of MMOT should be independent of matrix field learning, which means that the runtime can be significantly reduced by optimizing the implementation, e.g., by introducing parallel computing.\\n\\nRegarding the runtime, the 4-hour figure includes the evaluation phase. A significant amount of time was spent on evaluation, and the implementation needs to be fully optimized, which contributes to the longer runtime. We will provide the net runtime without the evaluation phase at a later date. As for the experiment on MNIST with a continuous condition space, such as inpainting, our method may not be well suited from a computational complexity perspective due to the high dimensionality of the condition vector.\\n\\n> * Appendix D5 and F are empty (or almost empty).\\n\\nWe have deleted these parts.\\n\\n> * you do not seem to provide any code. I find the algorithm description to be not perfectly clear, there I would very strongly suggest that you at least publish code for the toy example.\\n\\nWe have just submitted the code as supplementary material.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"I read the rebuttal, and you addressed my concern about how the Dirichlet energy relates to finding \\\"good\\\" posteriors. In style transfer or \\\"inverse problems\\\" it does not happen that $c_1$ or $c_2$ are unavailable (even a conditional generator would suffice, I guess, since one can take $c_1$, go into latent space, and sample from $c_2$). I see the point that one can \\\"interpolate\\\" between many conditions using your framework; however, the lack of scalability is concerning. I would really appreciate an MNIST example or any imaging example, even with a low-dimensional condition space. Then I would raise my score.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"We appreciate your thorough review of our manuscript. Below, we provide detailed responses to your inquiries. 
We hope that these clarifications will enhance your confidence in your evaluation.\\n\\n# For Questions\\n\\n> * Could the authors explain or give an intuition about the regression in MMOT (Eq. 3.4)?\\n\\nEquation (3.3) serves as a soft constraint for the boundary conditions in Equation (3.4). The meaning of Equation (3.4) is that the conditional distributions generated by the model should match the known data distributions $\\\\mu_\\\\xi$, $\\\\xi \\\\in A$. This is analogous to the requirement in standard FM that the endpoints of the probability path generated by the model correspond to the source and target distributions. In the case of FM, where only two distributions are considered, this can be achieved by simply connecting them with a straight line, eliminating the need for regression. However, in EFM, we need to consider three or more distributions simultaneously, making it hard to satisfy the constraints strictly. Therefore, we relax the constraints to be soft in this context.\\n\\n> * Could the authors show the extrapolation ability of their methods in a more realistic application of EFM, e.g. style transfer of images?\\n\\nOur method is designed to work in situations where the dimensionality of the conditioning vector is relatively low, and the conditions are sparsely observed. Such scenarios are common in the context of molecular generation, as demonstrated in \\u00a71 and \\u00a77.2. Therefore, EFM is expected to show extrapolation ability in a more practical molecular generation.\\n\\nWe have incorporated the aforementioned explanation into the introduction of the revised manuscript.\"}", "{\"title\": \"Global comments by authors\", \"comment\": \"We appreciate the detailed feedback on our paper. Many reviewers pointed out the unclear relationship between smoothness and extrapolation, as well as the scalability issues of MMOT-EFM. 
In response to these comments, we made the following revisions, which have been reflected in the revised manuscript.\\n\\n---\\n\\n## How to Impose Smoothness to Allow Extrapolation to Conditions\\n\\nFirst, we would like to clarify the meaning of \\\"smoothness,\\\" as mentioned in our motivation. **Our goal is to ensure that the conditional distribution $ p(x \\\\mid c) $ which we will generate is \\\"smooth,\\\" meaning it minimizes the Dirichlet energy as defined in Equation (3.2).** Intuitively, the Dirichlet energy represents the sensitivity of the distribution $ p(x \\\\mid c) $ with respect to the conditioning vector $ c $. Specifically, it holds that\\n\\n$$\\n\\\\operatorname{Dir}(p)= \\\\text{``}\\\\lim_{\\\\varepsilon\\\\to0}\\\\text{''} C_k \\\\iint_{\\\\Omega \\\\times \\\\Omega} \\\\frac{W_2^2(p(\\\\cdot \\\\mid c_1), p(\\\\cdot \\\\mid c_2))}{2 \\\\varepsilon^{k+2}} \\\\boldsymbol{1}_{|c_1-c_2| \\\\leqslant \\\\varepsilon} \\\\mathrm{~d} c_1 \\\\mathrm{d} c_2 \\\\quad \\\\text{for} \\\\quad p\\\\colon\\\\Omega\\\\ni c\\\\longmapsto p(\\\\cdot \\\\mid c)\\\\in\\\\mathcal{P}(D),\\n$$\\n\\nwhere $ k $ is the dimension of the condition space $ \\\\Omega $. For the precise meaning of the limit $ \\\\text{``}\\\\lim_{\\\\varepsilon\\\\to0}\\\\text{''} $ and the value of the constant $ C_k $, please refer to [\\u00a71.3, Lavenant, 2019].\\n\\nThus, **minimizing the Dirichlet energy implies that the sensitivity with respect to the condition $ c $ is not too large.**\\n\\nAlthough there are infinite ways of extrapolation, it is reasonable to assume an inductive bias that the sensitivity of data in nature (e.g., molecules) to conditions (e.g., chemical properties) is not unnaturally large. Therefore, our method addresses extrapolation by learning a model such that the data to be extrapolated follows this inductive bias of low sensitivity. 
We would like to note that this kind of inductive bias has been used throughout the history of generative models as a method to prevent overfitting and stabilize generative models; see, for example, Miyato et al. in ICLR, 2018.\\n\\nOur experiments in \\u00a77 demonstrate that EFM, which minimizes the Dirichlet energy, outperforms methods that do not minimize this energy (such as FM and COT-FM) in terms of generation performance.\\n\\nIn addition, the cost (objective) function used in our multi-marginal optimal transport (MMOT) approach provides an upper bound on the Dirichlet energy; please refer to lines 233-236 and Table 1. Therefore, optimizing the transport plan $ \\\\pi $ through the MMOT approach also minimizes the Dirichlet energy, which in turn reduces the sensitivity of the generated distribution $ p(x \\\\mid c) $ with respect to the conditioning vector $ c $.\\n\\n---\\n\\n## Scalability of EFMs\\n\\nBecause we mentioned the complexity of MMOT in the manuscript, many reviewers were concerned about the scalability of the proposed method. **The new experiment we conducted in \\u00a77.2 to generate images with continuous conditions is expected to dispel this concern.**\\n\\nIn \\u00a77.2 of the revised manuscript, we included the conditional generation of the MNIST dataset with a single ImageNet background, where the two conditions are \\\"color\\\" and \\\"rotation,\\\" which form a 4-dimensional condition space $ \\\\Omega=[0, 1]^4 $ (the first dimension is rotation, and the rest are RGB). For training, we used a conditional dataset corresponding to 12 uniform random samples of $ \\\\Omega $. We evaluated the Wasserstein distance from the ground truth (GT) conditional distributions in a manner similar to Figure 4. Note that these conditional distributions are nonlinear in the presence of background. We compared our method with the classifier-free guidance method (Zheng et al. [1]) with different guidance strengths. 
On this dataset, we see that the Wasserstein error increases monotonically with the distance of the target conditions from the training set of conditions (for example, $ c=[0, 0, 0, 0] $ is very far from the training conditions, and it is much more difficult to realize).\\n\\nPlease note that we had to make color and rotation the choice of conditions in this experiment because the default conditional labels of the image benchmarks, such as MNIST/CIFAR10, are (1) discrete, and (2) there is no impartial metric on the space of label conditions. For example, MNIST is labeled with digits from 0 to 9, but in terms of image generation, 0 is no closer to 1 than it is to 9. The one-hot vector embedding is also not very reflective of actual image generation because 9 is actually much closer to 4 in shape than it is to 3. This is the reason why we limited our experiments to datasets like ZINC-250k of the form $ (x, c(x)) $, where $ c(x) $ is a feature of $ x $.\\n\\n---\\n\\nThe above explanations have been added in purple highlights in the revised manuscript. We hope that these sections will make the novelty and potential of our research clearer.\"}", "{\"title\": \"Thank you very much for the followup!\", \"comment\": \"Thank you very much for the follow-up. We included in the section 7.2 of the revised manuscript the conditional generation of the MNIST dataset with a single image-net background, where the two conditions are \\u201ccolor\\u201d and \\u201crotation\\u201d that constitute a 4-dimensional condition-space $\\\\Omega=[0, 1]^4$(first dimension is the rotation, and the rests are RGB) . For the training, we used a conditional dataset corresponding to 12 uniform random samples from $\\\\Omega$, and we evaluated the Wasserstein distance from the ground truth (GT) conditional distributions in a manner similar to Figure 4. Note that these conditionings are nonlinear in the presence of background. 
We compared our method against the classifier-free guidance method (Zheng et al. [1]) with different guidance strengths. On this dataset, we see that the Wasserstein error increases monotonically with the distance of the target conditions from the training set of conditions (for example, c=[0, 0, 0, 0] is very far from the training conditions and is that much more difficult to realize).\\n\\nPlease note that we had to make color and rotation the choice of conditions in this experiment because the default conditional labels of the image benchmarks such as MNIST/CIFAR10 are (1) discrete, and (2) there is no impartial metric on the space of label conditions. \\nFor example, MNIST is labeled with digits from 0 to 9, but in terms of image generation, 0 is no closer to 1 than it is to 9. The one-hot vector embedding is also not very reflective of actual image generation because 9 is much closer to 4 in terms of shape than it is to 3. This is the reason why we limited our experiments to datasets like ZINC-250k of the form $(x, c(x))$, where $c(x)$ is a feature of $x$. While there are many datasets of this form in application, for example, in applied fields of science such as economics, biochemistry, and physics, the benchmark datasets/models/publicized architectures are difficult to obtain for these fields.\\n\\n[1] Qinqing Zheng and Matt Le and Neta Shaul and Yaron Lipman and Aditya Grover and Ricky T. Q. Chen, Guided Flows for Generative Modeling and Decision Making, 2024\"}", "{\"comment\": \"Thank you for reconsidering our score. We would like to address the follow-up concerns.\\n\\n> First the use of the autoencoder makes it essentially low dimensional and does not show scalability. \\n\\nWe would like to highlight that many generative models perform flow-based modeling in low-dimensional latent spaces. 
For instance, the renowned Stable Diffusion model operates in a latent space with dimensions as low as 16 x 16 x 8 (LDM-16) [1], and large-scale chemical research utilizes a latent space of dimension 512 [2]. \\n\\n\\nThe success of modern generative models often hinges on the careful design of latent spaces and the use of pre-trained networks. Engineering these components is crucial for achieving high-quality results, as demonstrated by the performance of models like Stable Diffusion across various domains.\\n\\n\\nTherefore, while the dimensionality of the latent space contributes to the complexity of MMOT, **it is the combination of advanced latent space design and robust network architectures that drives the scalability and effectiveness of generative models.** We believe our approach aligns with these principles and shows potential for scalability despite using an autoencoder.\\n\\n\\n[1] Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, 2022 \\n[2] Kaufman et al., Latent Diffusion for Conditional Generation of Molecules, 2024\\n\\n\\n> this example is not really standard which makes the interpretation of the results not easily understandable (i.e., how it performs relatively to other methods). \\n\\nAs mentioned in our previous rebuttal, the provided example is non-standard because we label the images with continuous conditions of rotation and color change rather than standard categorical conditions. This domain of \\\"continuous conditions\\\" is where EFM is designed to excel.\\n\\nHowever, **it is possible to interpret the results and compare our method's performance with other methods.** In the MNIST experiment, constructing an inverse problem that can take any distribution in OOD is not natural. 
In this rational experimental setting, the bottom figure in Figure 5 shows that our method has a relatively small worst-case generalization error even in OOD situations with large distances from the training condition.\\n\\n\\n > Btw, what is guided1.0, 1.1 and 1.2?\\n\\nWe apologize for not clearly explaining this in the revision. In the added experimental results, the values following the label \\\"guided\\\" represent the guidance strength parameter in the classifier-free guidance method.\"}", "{\"comment\": \"We appreciate your comments. In response to your comments, we have added a new conditional image generation experiment. This experiment clearly demonstrates the scalability of our EFM. For details on the experiment settings, etc., please see the global comment and revision.\\n\\nYour feedback is very valuable. The review deadline is approaching, so we would be grateful if you could provide your comments and reconsider your rating as soon as possible.\"}", "{\"title\": \"Rebuttal by authors (Part 1)\", \"comment\": \"We appreciate your valuable feedback and apologize for any lack of clarity in our initial explanation. Below, we address your concerns and questions:\\n\\n# For Weaknesses\\n\\n> * While the motivation of EFM was to provide ensure that the learned network $u(x,t,c)$ is smooth with respect to the conditioning vector $c$, the authors do not address how imposing smoothness can allow extrapolation to conditioning vectors not seen during training.\\n> * Could the authors explain why the multi-marginal optimal transport approach allows for extrapolating to conditioning vectors not seen during training?\\n\\nWe apologize for the unclear wording in line 86. First, we would like to clarify the meaning of \\\"smoothness,\\\" as mentioned in our motivation. Our goal is to ensure that the conditional distribution $p(x \\\\mid c)$ which we will generate is \\\"smooth,\\\" which means minimizing the Dirichlet energy as defined in Equation (3.2). 
Intuitively, the Dirichlet energy represents the sensitivity of the distribution $p(x \\\\mid c)$ with respect to the conditioning vector $c$. Thus, minimizing the Dirichlet energy implies that the sensitivity with respect to the condition $c$ is not too large.\\n\\nAlthough there are infinite ways of extrapolation, it is reasonable to assume an inductive bias that the sensitivity of data in nature (e.g., molecules) to conditions (e.g., chemical properties) is not unnaturally large. Therefore, our method addresses extrapolation by learning a model such that the data to be extrapolated follows this inductive bias of low sensitivity.\\n\\nWe would like to note that this kind of inductive bias has been used throughout the history of generative models as a method to prevent overfitting and a method to stabilize generative models; see, for example, [Miyato et al. in ICLR, 2018](https://openreview.net/forum?id=B1QRgziT-).\\n\\nOur experiments in \\u00a77 demonstrate that EFM, which minimizes the Dirichlet energy, outperforms methods that do not minimize this energy (such as FM and COT-FM) in terms of generation performance.\\n\\nIn addition, the cost (objective) function used in our multi-marginal optimal transport (MMOT) approach provides an upper bound on the Dirichlet energy; please refer to lines 233-236 and Table 1. Therefore, optimizing the transport plan $\\\\pi$ through the MMOT approach also minimizes the Dirichlet energy, which in turn reduces the sensitivity of the generated distribution $p(x \\\\mid c)$ with respect to the conditioning vector $c$.\"}", "{\"summary\": \"The authors propose extended flow matching (EFM) for conditional sampling and style transfer using flow matching. EFM consists of\\n\\n1. learning a field which also uses the conditioning vector $c$ as input, which the authors call a matrix field. \\n2. 
The authors then integrate the learned field $u(x, t, c)$ along different paths $\\gamma: [0, 1] \\rightarrow [0, 1] \\times C$, where $C$ is the set of conditioning vectors. \\n 1. For instance, for conditional generation the authors propose integrating along the path $\\gamma(t) = (t, c)$, which reduces to conditional flow matching. \\n 2. For style transfer, the authors integrate along the path $\\gamma(t) = (1, (1-t) c_1 + t c_2)$. Since integrating along $\\gamma(t)$ can be out of domain for models trained just on pairs $x, c \\sim p(x, c)$, the authors propose a learning algorithm such that the field $u_\\theta$ also observes such paths during training. \\n\\nThe authors propose learning such a field $u$ using optimal transport:\\n\\n1. the authors propose learning an optimal plan similar to [Lipman et al 2023]\\n2. instead of using linear interpolation between different points on a path, the authors extend the set of paths to include functions belonging to an RKHS.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The authors identify an interesting problem: observing the conditioning vector in a number of domains can be hard or expensive. The proposal of integrating along paths between different marginals is also interesting; a similar proposal is studied in [Albergo et al 2023].\", \"weaknesses\": \"1. While the motivation of EFM was to ensure that the learned network $u(x, t, c)$ is smooth with respect to the conditioning vector $c$, the authors do not address how imposing smoothness can allow extrapolation to conditioning vectors not seen during training.\\n2. Could the authors explain why the multi-marginal optimal transport approach allows for extrapolating to conditioning vectors not seen during training?\\n3. The authors should also consider including other works that learn multi-marginal flow models. 
For instance, [Albergo et al 2023] propose learning multi-marginal flows and present a learning algorithm for optimizing the paths such that the transport cost in the $W_2$ metric is minimized. \\n4. [Albergo et al 2023] also propose a much more general algorithm for including paths between samples from an arbitrary number of marginal distributions, available during training. \\n5. The experiments section can be improved by adding extra text explaining the results and the figures, particularly in figure 4.\\n\\n\\n[Albergo et al 2023] Albergo, M.S., Boffi, N.M., Lindsey, M. and Vanden-Eijnden, E., 2023. Multimarginal generative modeling with stochastic interpolants. arXiv preprint arXiv:2310.03695.\", \"questions\": \"1. Can the authors consider providing definitions before introducing new notation in the text?\\n2. What is the effect of defining $\\pi$ using plans built from batched samples? Would the vector/matrix field learned change as a function of the batch size? \\n3. What kernels do the authors use for the RKHS used to construct paths?\\n4. In lines 212-214 and lines 220-222, can the authors clarify the output of $u$?\\n5. The discussion about the weak assumption of measurability and continuity of $p(x|c)$ with respect to $c$ requires clarification, particularly since piece-wise continuous functions are measurable as well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply from authors (1/2)\", \"comment\": \"Thank you for your response. We would like to clarify our position further.\\n\\n> So in effect, the authors are not able to upper bound the Dirichlet energy with their objective?\\n\\nUnbiased estimators of OT plans using finite samples or mini-batches are discussed in the following paper:\\n\\n- Fatras et al., Learning with minibatch Wasserstein: asymptotic and gradient properties. 
AISTATS 2020.\\n\\nIt would also be possible to obtain an upper bound on our Dirichlet energy by constructing an estimator in the same manner as they did. In training generative models, we believe a similar estimator can be constructed approximately using stochastic gradients.\\n\\n> Their method is applicable to marginals distributions with continuous-valued support, see definition 1.\\n\\nFirstly, let us clarify the terminology. In [Definition 1, Albergo et al., 2024], the term \\\"support\\\" refers to the set of data points $x$. In both our paper and Albergo et al.'s paper, $x$ takes continuous values.\\nYou might be referring to $\\Delta^K$ in [Definition 1, Albergo et al., 2024] as the \\\"support.\\\" While it is true that $\\alpha \\in \\Delta^K$ takes continuous values, $\\alpha$ merely represents a probability vector over a discrete set, making it essentially discrete.\\nIn contrast, in our setting, the condition vector $c \\in \\Omega$ not only takes continuous values but can also represent quantities that vary continuously, such as color or angle. This allows for a more flexible representation of continuously varying conditions.\\n\\n> This is incorrect. In inverse problems, one can define a matrix $ A $ and observations $ y = Ax + \\varepsilon $, where $ x $ is the image and $ \\epsilon $ is mean zero noise, More typically, one can always consider generating labels $ y = g(x) $ for a deterministic function $ g $.\\n\\nWhen considering the relationship $y = Ax + \\varepsilon$ for an image $x$ and a categorical label (digit) $y$, if the image $x$ changes continuously, the label $y$ would also change continuously. However, since $y$ is a categorical variable, it is unnatural for it to change continuously.
This is why we believe there is no impartial metric on the conditions of digits when considering a conditional distribution of images, especially because there are no ground-truth datasets other than those used in training (0, ..., 9).\\n\\n> Did the authors use a pre-trained autoencoder? What model class, were any pre-trained models used, what batch size, the cost of running mmot, how many iterations.\\n\\nFor the image experiment, we used [``Encoder_ResNet_AE_CIFAR``](https://pythae.readthedocs.io/en/latest/models/nn/cifar/resnets.html)\\n of the Pythae library, fine-tuned on the training distributions over 40 epochs with batch size 128. \\nThe general cost of Sinkhorn MMOT is $O(n^m)$, where $n$ is the batch size and $m$ is the number of marginals. Indeed, with this scaling, setting $m= |C_{\\mathrm{train}}|$ is prohibitive, where $C_{\\mathrm{train}}$ is the set of all conditions we use in training.\\nWe also note that this affects memory availability because a storage complexity of $O(n^m)$ is infeasible for large $n$.\"}", "{\"summary\": \"This paper introduces extended flow matching, a new flow matching based method designed for conditional generation. For this, the authors make use of the generalized continuity equation by Lavenant. The authors show that their proposed loss indeed has the correct gradients, i.e., regresses onto the true velocity field of the generalized continuity equation. The algorithm consists of \\\"learning\\\" an interpolation via kernel regression (which is needed since \\\"straight paths\\\" are not the only viable solution anymore), and then regressing onto a flow matching loss where the field is now matrix-valued. This is a generalization of the usual inverse problems framework of flow matching. 
Further, the authors showcase the efficacy of their algorithm via a toy example and conditional molecular generation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I find the motivation very clear: Sometimes we already know the posteriors for several conditions (for instance in molecular dynamics, where we obtain some posterior samples via MCMC), and want to \\\"smartly\\\" interpolate between the conditions, i.e., learn a generative model which walks along \\\"generalized\\\" geodesics in the space of probability measures. I also like that the authors were very rigorous in their theorems and motivation for the developed algorithm.\", \"weaknesses\": \"However, the glaring weakness is that there is no clear-cut numerical use case shown. I would like to see a not toyish example where we actually need several conditions and the transport between them. Usually, in the classical inverse problems works there is an implicit geodesic path taken where $y_t = t y + (1-t)y$, since one does not need to alter the condition if posterior sampling is the ultimate goal. If one wants to do style transfer (which seems to be the second motivation of this paper), then one can simply use a conditional FM network which receives the two conditions (source and target) as inputs. Therefore, while theoretically neat I am not convinced of why the generalized continuity equation and a network which moves efficiently also in the condition space, is advantageous. The authors can convince me by providing a clear example where either i) the classical conditional algorithms are not applicable or ii) this approach significantly outperforms the other flow matching models.\\n\\nI also have some smaller concerns. \\n\\n1) The scaling in $N_c$ and condition dimension seems to be bad. Can you provide the run times for the molecular example also for the baselines? 
It only says in the appendix that they were completed within 4 hours, but I expect the baselines to train much quicker. Also, the latent space of a VAE is pretty low dimensional. Please provide training of your conditional flow matching model on MNIST (no VAEs..), where the condition space is not discrete (i.e., for instance inpainting). Even if this does not fit your motivation, I would like to see the results in such a more standard example and this would improve my confidence in the scalability. \\n\\n2) Appendix D5 and F are empty (or almost empty). \\n\\n3) You do not seem to provide any code. I find the algorithm description to be not perfectly clear, therefore I would very strongly suggest that you at least publish code for the toy example. \\n\\n4) I believe that the example 7.1 is meaningless. You construct a random example with sparse conditions. Then you show that your algorithm performs better on the OOD. But basically you can construct an inverse problem which aligns with your in-distribution posteriors and does anything else on the OOD data. Of course I am aware that your point is that your algorithm is minimizing the Dirichlet energy and you measure the distribution induced by this. However, it is not clear to me if this is the theoretically optimal thing to do (w.r.t. Wasserstein). I am guessing that your algorithm computes something like Wasserstein barycenters weighted by some distance to the known conditions? Please clarify why the minimization of the generalized Dirichlet energy should yield theoretically sound posteriors. \\n\\n5) The manuscript is sloppy at times when discussing related work. \\\"The authors in (Wildberger et al., 2023; Atanackovic et al., 2024) developed FM-based models to estimate the posterior distribution when the prior distribution p(c) of conditions is known. 
In contrast, our approach tackles situations where the conditions can only be sparsely observed, and the prior distribution is unknown.\\\"\\n\\nThe prior distribution p(c) is not known in (Wildberger et al., 2023). They are only able to sample from the joint distributions (c,x), but this does not mean that you can evaluate it. Further, their algorithm can very easily be adapted to the setting you described. If one has posterior samples for sparse conditions $c_i$, one can simply do the joint training over $(x_{i,j}, c_i)$.\\n\\n6) When style transfer is one of the main motivations, I would also like to see an example of it. \\n\\nOverall, I appreciate the idea and think that it has merits, but the execution prevents me from accepting it in the current form. I would love to see a practical example where the main motivation of your algorithm becomes clear. Furthermore, providing a more standard inverse problem on MNIST (with no encoder/decoder) and a continuous condition space would show me that your algorithm at least somewhat scales. If these problems are discussed/solved, then I am willing to raise my score.\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by authors (Part 2)\", \"comment\": \"> * I believe that the example 7.1 is meaningless. You construct a random example with sparse conditions. Then you show that your algorithm performs better on the OOD. But basically you can construct an inverse problem which aligns with your in-distribution posteriors and does anything else on the OOD data. Of course I am aware that your point is that your algorithm is minimizing the Dirichlet energy and you measure the distribution induced by this. However, it is not clear to me if this is the theoretically optimal thing to do (w.r.t. Wasserstein). 
I am guessing that your algorithm computes something like Wasserstein barycenters weighted by some distance to the known conditions? Please clarify why the minimization of the generalized Dirichlet energy should yield theoretically sound posteriors.\\n\\nWe apologize for the lack of background information on Dirichlet energy. The Dirichlet energy represents how sensitive the distribution $p(\\\\cdot \\\\mid c)$ is to perturbations in $c$ with respect to the Wasserstein distance $W_2$. In fact, the following holds:\\n\\n$$\\\\operatorname{Dir}(p)= \\\\text{``}\\\\lim_{\\\\varepsilon\\\\to0}\\\\text{''} C_k \\\\iint_{\\\\Omega \\\\times \\\\Omega} \\\\frac{W_2^2(p(\\\\cdot \\\\mid c_1), p(\\\\cdot \\\\mid c_2))}{2 \\\\varepsilon^{k+2}} \\\\boldsymbol{1}_{|c_1-c_2| \\\\leqslant \\\\varepsilon} \\\\mathrm{~d} c_1 \\\\mathrm{d} c_2\\\\quad\\\\text{ for }\\\\quad p\\\\colon\\\\Omega\\\\ni c\\\\longmapsto p(\\\\cdot \\\\mid c)\\\\in\\\\mathcal{P}(D),$$\\n\\nwhere $k$ is the dimension of the condition space $\\\\Omega$. For the precise meaning of the limit $\\\\text{``}\\\\lim_{\\\\varepsilon\\\\to0}\\\\text{''}$ and the value of the constant $C_k$, please refer to [\\u00a71.3, Lavenant, 2019].\\n\\nThus, minimizing the Dirichlet energy is equivalent to yielding posteriors that are not unnaturally sensitive in the Wasserstein sense. The experiment in \\u00a77.1 verifies this effect. Specifically, when generating the distribution of unobserved conditions using COT-FM, the distribution changes significantly from $c=(0.25,0)$ to $c=(0.5,0)$. In contrast, using MMOT-EFM reduces the extent of this change.\\n\\n> * The manuscript is sloppy at times when discussing related work. \\\"The authors in (Wildberger et al., 2023; Atanackovic et al., 2024) developed FM-based models to estimate the posterior distribution when the prior distribution p(c) of conditions is known. 
In contrast, our approach tackles situations where the conditions can only be sparsely observed, and the prior distribution is unknown.\\\" The prior distribution p(c) is not known in (Wildberger et al, 2023). They are only able to sample from the joint distributions (c,x), but this does not mean that you can evaluate it. Further, their algorithm can very easily be adapted to the setting you described. If one has posterior samples for sparse conditions ci one can simply do the joint training over (xi,j,ci).\\n\\nWe apologize for the misunderstanding regarding the work of Wildberger et al., 2023. The contribution of the authors lies in their focus on the fact that the objective function of the FM for conditional generation only requires joint samples $(c,x)$, and they generated these joint samples in a Bayesian manner, such as $x \\\\sim p(x)$ and $c \\\\sim p(c \\\\mid x)$. \\n\\nHowever, the treatment of conditions $c$, aside from the aforementioned joint sampling, is almost identical to (OT-C)FM as described in \\u00a77. Therefore, if we apply their results to our setup, we can expect similar experimental outcomes to those of (OT-C)FM.\\n\\nWe have clarified this in the revised manuscript.\"}", "{\"title\": \"Rebuttal by authors (Part 2)\", \"comment\": \"> * The authors should also consider including other works that learn multi-marginal flow models? 
For instance, [Albergo et al 2023] propose learning multi-marginal flows and present a learning algorithm for optimizing the paths such that the transport cost in W2 metric is minimized.\\n> * [Albergo et al 2023] also propose a much more general algorithm for including paths between samples from an arbitrary number of marginal distributions, available during training.\\n\\nIn fact, our proposed EFM framework indeed incorporates the approach of [Albergo et al., 2023].\\n\\nMore specifically, one of the implementations of EFM, Geo-EFM (supplemented in Appendix F), and [Albergo et al., 2023] both use the same transport plan $\\\\pi$ to learn the matrix (or vector) field. In addition, our interpolator $\\\\bar\\\\psi(c \\\\mid (x_i)_i)$ in Table 1 corresponds to the barycentric stochastic interpolant $x(\\\\alpha) = \\\\sum_i x_i \\\\alpha_i$ in [Equation (5), Albergo et al., 2023]. Here, the interpolation coordinates $\\\\alpha = (\\\\alpha_i)_i \\\\in \\\\Delta^K$ valued in the simplex $\\\\Delta^K$, can be roughly regarded as the condition vector $c \\\\in \\\\Omega$ in our case, i.e., $\\\\Delta^K \\\\approx \\\\Omega$. More precisely, the condition $c$ is given by expanding it on a basis. That is, $\\\\alpha$ is the coefficient when expanding $c$ as $c = \\\\sum_i c_i \\\\alpha_i$ in a basis $(c_i)_i$. Consequently, Geo-EFM, similar to [Albergo et al., 2023], can utilize an arbitrary number of marginal distributions during the training process.\\n\\nThe only difference between Geo-EFM and the method in [Albergo et al., 2023] is that, after learning the vector field, Albergo et al. optimize the path on the space of \\\"conditions\\\" $\\\\alpha \\\\colon [0,1] \\\\to \\\\Delta^K \\\\approx \\\\Omega$ such that the transport cost in the W2 metric is minimized. 
Thus, we can take the same procedure as above to optimize $\\\\gamma \\\\colon [0,1] \\\\to \\\\Omega$ with the Geo-EFM setting.\\n\\nOur MMOT-EFM is novel in that it minimizes the transport cost in a complementary way to the optimization of $\\\\gamma$. MMOT-EFM trains a matrix field, which is an extension of a vector field, to also minimize a generalization of the transport cost called Dirichlet energy. This makes it possible to learn a model that transports optimally with only one training of the model without optimizing $\\\\gamma$. We note that there is a computational limitation on the number of marginal distributions (as you pointed out) due to the use of MMOT during training.\\n\\nMoreover, in our setting, the set of conditions $\\\\Omega$ is continuous, whereas in [Albergo et al., 2023], it is discrete.\\n\\nIn summary, MMOT-EFM and [Albergo et al., 2023] can be seen as complementary approaches. We have mentioned the above in the revision.\\n\\n> * The experiments section can be improved by adding extra text explaining the results and the figures, particularly in figure 4.\\n\\nWe apologize for the inconvenience. Due to page limitations, we could not include sufficient explanation in the submitted manuscript. We have added more explanations to the caption in the revision.\\n\\n# For Questions\\n\\n> * Can the authors consider providing definitions before introducing a new notation in the text?\\n\\nWe apologize for the technical nature of the mathematical equations. We would like to consider your comments on the manuscript. Could you tell us exactly where we introduce the new notation before the definition?\\n\\n> * What is the effect of defining $\\\\pi$ using plans built using batched samples? 
Would the vector/matrix field learned change as a function of the batch size?\\n\\nLarger batch sizes tend to stabilize the learning process because the changes in the matching $\\\\pi$ per iteration become smaller.\\n\\n> * What kernels do the authors use for the RKHS used to construct paths?\\n\\nIn general, one can employ any nonlinear kernel.\\nIn the implementation of this paper, we use the linear kernel.\\n\\n> * In lines 212-214 and lines 220-222, can the authors clarify the output of $u$?\\n\\nHere, $u$ returns a matrix of size $d \\\\times (1+k)$, where $d$ is the dimension of the data $x \\\\in D$ and $k$ is the dimension of the condition $c \\\\in \\\\Omega$. In general, the output of $u$ is of size $d \\\\times \\\\operatorname{dim} \\\\Xi$ (see Proposition 3.1). In \\u00a73.1, $\\\\Xi = I \\\\times \\\\Omega$, so $\\\\operatorname{dim} \\\\Xi = 1+k$.\\n\\n> * The discussion about the weak assumption of measurability and continuity of $p(x \\\\mid c)$ with respect to $c$ requires clarification, particularly since piece-wise continuous functions are measurable as well.\\n\\nWe apologize for the confusion. Our intention was not clearly conveyed. In COT/Bayesian-FM, the conditional distribution $p(x \\\\mid c)$ is assumed to be measurable with respect to $c$, which allows $p(x \\\\mid c)$ to change discontinuously. In contrast, we assume that $p(x \\\\mid c)$ changes continuously, or more precisely, that the sensitivity of $p(x \\\\mid c)$ with respect to the condition $c$ is small. This difference is demonstrated in \\u00a77.1, Figure 4(b).\\n\\nWe have included this clarification in the revision.\"}", "{\"title\": \"Reminder from authors\", \"comment\": \"We hope this message reaches you well. We are writing to remind you that the deadline for submitting comments on our ICLR 2025 rebuttal is approaching.\\n\\nWe have revised our paper and added an experiment on MNIST; please see the global comment. 
We believe that these results will address your concerns.\\n\\nYour feedback is very valuable to us, so if you have time, we would be grateful if you could provide your comments and reconsider your rating.\"}", "{\"comment\": \"> It is known that the finite sample approximation converges asymptotically.\nCan the authors clarify where Theorem 5.10 in Villani, 2009 discusses the convergence of finite-sample approximations? \\n\\n> We also conjecture that the multi-marginal case in line 276 converges in a similar way\\nSince this conjecture and the finite-sample convergence in the coupling of two distributions are an important part of the paper, can the authors provide citations which discuss convergence properties or proofs?\\n\\n> Our approach does not require the optimization of $\\\\gamma$ for every interpolation.\\n\\nOptimizing the interpolant in step 3 of [Albergo et al 2024] is significantly cheaper than solving A for EFM. Moreover, [Albergo et al 2024] are able to scale their experiments up to 64 x 64 x 3 dimensions, significantly higher than the dimensions considered in this paper. \\n\\n> The difficulty in providing the efficacy of our method on image benchmarks like MNIST/CIFAR10 is that the conditions in these datasets are (1) discrete and (2) there is no impartial metric on the space of conditions\\n\\nThere are plenty of high-dimensional posterior sampling problems that have been studied with diffusion models, with several metrics such as structural similarity, perceptual metrics such as LPIPS, MAE, MSE, etc. Albeit imperfect, there are several benchmarks for these tasks. Also, can the authors explain what they mean by an impartial metric?\\n\\nCan the authors provide the runtime for solving the finite-sample optimal transport for data of size ~ 12288 (i.e., 64 x 64 x 3) with a batch size of 128? More generally, for a single batch, what is the run time for all the steps in section A as identified by the authors.
\\n\\nThank you for adding the MNIST digits on CIFAR10 background experiment. Can the authors provide details on the baselines as well as the metrics used? To the best of my understanding, $W_1$ is also an integral probability metric, which requires solving a high-dimensional optimization over Lipschitz functions with Lipschitz constant less than 1.\"}
Therefore, we have chosen an experimental setup that has dependencies on $c$.\\nIf the problem is to generate $x_c \\\\sim \\\\mu_c$ for each $c$ independently for each $c \\\\in C_{\\\\mathrm{train}}$ (e.g., MNIST with digit conditions), we do not need to consider the *joint* distribution, and it makes no difference whether we use MMOT-EFM or Geo-EFM, or even the EFM with random $\\\\pi$ couplings. In particular, when it comes to generating only the 'marginal distribution' for $c \\\\in C_{\\\\mathrm{train}}$, EFM and [Albergo et al.] are theoretically the same as the original OT-CFM with conditions. \\n\\n> it would be useful if the authors can comment on what limitations the method faces when scaling to dimensions > 100.\\n \\nThe effect of *dimension* on the complexity of EFM is indirect. In general, the computational cost of MMOT/OT for $d$ dimensional particles scales linearly with *d*. In contrast, it scales with $n^m$, where $n$ is the number of particles and $m$ is the number of marginal distributions. \\nAt the same time, the larger the dimension $d$, the slower the *batch* sample converges to the true distribution; in particular, the expected Wasserstein distance from the empirical distribution to the true distribution scales with $O(n^{-1/d})$ when measured with the $W_1$ distance [3]. Thus, the larger the $d$, the larger the number $n$ of particles it would take to approximate the MMOT/OT with a given precision in terms of the *transport* cost.\\n\\n[3] Fournier, and Guillin. \\\"On the rate of convergence in Wasserstein distance of the empirical measure.\\\" Probability theory and related fields, 2015.\\n\\n\\n> Even assuming that they have enough model capacity to learn a vector (or matrix) field, what are the implications of using finite-sample approximations of the transport plan?\\n\\nThe implications of using finite-sample approximations are twofold.\\n(1) The quality of the approximated marginal distributions is affected by the batch size. 
\\n(2) The quality of the approximated coupling (joint distribution) is affected by the batch size. \\n\\nThe first part applies to any batch-based flow-matching method in general because *batches* are used in the flow matching as the approximation of marginal distributions, which converges to the ground-truth distribution. This also applies to the optimal transport cost and, most likely, to Dirichlet energy.\\nAs stated above, the larger the $d$, the greater the size of the batches it would require to obtain a matrix field with the lowest *transport* energy.\\n\\nWe believe that our response above clarifies the limitations of the method and the impact of mini-batch OT, which are your concerns.\"}", "{\"title\": \"Rebuttal by authors (Part 2)\", \"comment\": \"# For Overall\\n\\n> * This work presents an intriguing idea with potential to enhance understanding of how these models generalize to unseen conditions. However, this aspect is not theoretically explored. Additionally, the current method does not scale to practical settings. Further investigation into when the assumptions behind this method are valid relative to other methods would significantly strengthen this work. A deeper understanding of how this relates to prior literature and when this method is preferable would likely change my opinion of this work.\\n\\nWe appreciate your evaluation of our concept. Indeed, there are scalability issues, and our method may need further development to be applied to general tasks such as image generation. However, in the context of our motivation, such as molecular generation, we have demonstrated its advantages over existing methods in \\u00a77.2. Regarding the relation to prior literature, particularly the differences from COT-FM and Bayesian FM, we have clarified these points in our response to your previous question. Please refer to that response for a detailed explanation.\"}", "{\"comment\": \"We appreciate your interest in the EFM concept. 
Below, we address your concerns regarding the molecular generation experiment:\\n\\n# For Weakness\\n\\n> * I feel concerned about the experimental design. For instance, the authors introduce a rather unusual setting (Appendix 1300-1306). Though it aligns well with the synthetic point cloud experiments, it is quite different from the common practice [1]\\n\\nThe experimental design is motivated by the specific objectives outlined in \u00a71. Our method demonstrates a distinct advantage in scenarios where multiple molecules $x$ share the same label value, and the label value $c$ is sparse. Standard chemical properties, which are continuous and have fewer molecules with identical labels, are not suitable for this experiment. Consequently, we selected a scenario involving the number of bonds, where only two label values are available.\\n\\n> * I think critical experiments against highly related OT-CFM methods are missing in this version.\\n\\nThe term FM in \u00a77 refers to the OT-CFM method. We apologize for any confusion caused by the lack of clarity. This has been clarified in the revised manuscript.\\n\\n# For Questions\\n\\n> * Could you please justify the ZINC-250k experimental design?\\n\\nZINC-250k is a widely utilized molecular database of computationally designed compounds, created by G\u00f3mez-Bombarelli et al. (2018). It is a more realistic dataset that includes drug-like, commercially available molecules, compared to QM9, which is a standard dataset for molecule generation. We anticipate that similar results would be obtained if trained on other datasets.\\n\\n[[G\u00f3mez-Bombarelli et al., ACS Central Science, 2018]](https://pubs.acs.org/doi/full/10.1021/acscentsci.7b00572)\"}", "{\"title\": \"Thanks\", \"comment\": \"I will raise my score to 5, but only to 5, since this example does not really address my concerns. 
First, the use of the autoencoder makes it essentially low-dimensional and does not show scalability; second, this example is not really standard, which makes the interpretation of the results not easily understandable (i.e., how it performs relative to other methods). The W_1 plot again relies on OOD generalization, for which my issues have not been fully resolved (although your explanation on the Dirichlet energy helped), and the trend is not super clear.\\n\\nBtw, what is guided1.0, 1.1 and 1.2?\"}", "{\"title\": \"Rebuttal by authors (Part 1)\", \"comment\": \"We appreciate your interest in our EFM theory. We hope that the responses to your questions below will address your concerns comprehensively.\\n\\n# For Weakness\\n\\n> * It would be beneficial to elucidate how this method extends to unseen conditions more clearly. Lines 402-405 touch on this, but further emphasis on this point would be valuable. Designing a conditional generative model that technically extends to unseen conditions is straightforward, but ensuring that the model extends in a reasonable manner is more challenging. EFM has the potential to guide this extension, and further exploration of this point would be appreciated.\\n\\nOur method, MMOT-EFM, extends to unseen conditions by ensuring that the target conditional distribution $p(x \\mid c)$ is \\\"smooth.\\\" Specifically, we aim to minimize the Dirichlet energy as defined in Equation (3.2). Intuitively, the Dirichlet energy quantifies the sensitivity of the distribution $p(x \\mid c)$ with respect to the conditioning vector $c$. 
Minimizing the Dirichlet energy implies that the sensitivity to the condition $c$ is not excessively large.\\n\\nWhile there are numerous methods of extrapolation, it is reasonable to assume an inductive bias that the sensitivity of natural data (e.g., molecules) to conditions (e.g., chemical properties) is not unnaturally large. Therefore, our method addresses extrapolation by training a model such that the data to be extrapolated adheres to this inductive bias of low sensitivity.\\n\\nWe would like to highlight that this type of inductive bias has been historically employed in generative models to prevent overfitting and stabilize generative processes; see, for example, [Miyato et al. in ICLR, 2018](https://openreview.net/forum?id=B1QRgziT-).\\n\\n\\nWe have revised the manuscript to further emphasize this point in \\u00a71, illustrating how MMOT-EFM guides the extension to unseen conditions in a reasonable manner.\\n\\n> * The algorithm is not yet applicable to real-world scenarios. While the authors acknowledge this, it remains a significant limitation of the work's impact. The molecule experiment is limited in terms of comparisons to existing work and the overall training setup.\\n\\nOur method has yet to be evaluated on real-world datasets. However, the ZINC-250k dataset used for conditional molecule generation includes drug-like commercially available molecules, making it more realistic compared to well-known datasets such as QM9.\\n\\n> * Much of the theoretical statements are direct extensions from prior work.\\n\\nWhile it is true that Theorem 3.4 is a direct extension of the fundamental theorem in FM, the theory developed in \\u00a73 as a whole should not be seen as a straightforward derivation from existing FM theory. 
For instance, Proposition 3.1, which introduces a method for utilizing the generalized continuity equation (3.1) for generation and style transfer, is a novel technique within the context of generative models.\\n\\n# For Questions\\n\\n> * When is MMOT-EFM and EFM in general expected to outperform COT-FM / Bayesian-FM? Although there is a brief explanation of the differences in assumptions, it is challenging to understand the benefits of assuming piecewise continuity of $p(x|c)$ versus measurability. How does this compare to prior works in general?\\n\\nOur EFM approach is expected to perform better when there is prior knowledge that the distribution $p(x \\\\mid c)$ we aim to generate changes continuously with respect to $c$. Specifically, it is effective when the sensitivity of $p(x \\\\mid c)$ with respect to the condition $c$, quantified by the Dirichlet energy, is known to be small.\\n\\nIn contrast, COT-FM and Bayesian-FM do not incorporate such prior knowledge regarding the continuity or sensitivity of $p(x \\\\mid c)$ with respect to $c$.\\n\\nThis distinction is evident in the generated results, as shown in Figure 4(b). When generating the distribution of unobserved conditions using COT-FM, the distribution changes significantly from $c=(0.25,0)$ to $c=(0.5,0)$. Conversely, with MMOT-EFM, the extent of this change is mitigated.\\n\\nWe have included this explanation in the revised manuscript.\"}", "{\"metareview\": \"The paper proposes a new flow matching algorithm based on a generalized (matrix-valued) continuity equation, with the motivation being unobserved data/conditions. The main claim of the work is that it outperforms existing methods in molecular generation tasks with sparsely observed conditions.\\n\\nThe main strength is the rigorous development of the proposed algorithm, and clear motivation of the problem. 
However, the main claim of the paper that the method outperforms existing approaches is not fully validated in the experiments, and it remains unclear whether there is a practical setting in which the generalized continuity equation is useful. \\n\\nI recommend rejecting the paper at this stage, and encourage the authors to take the reviewers' detailed feedback into account for a resubmission.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewer a59p pointed out that there is not really a practical setting in which the method is useful\", \"Reviewer QpJX pointed out various issues with the theory.\", \"Both were not cleared after discussions in the rebuttal phase, which influenced my decision to reject the paper.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"Flow matching can generate different data distributions given different desired property conditions. The authors proposed the extended flow matching (EFM), which introduces a continuous mapping from a joint continuous space of time and property conditions to the corresponding data distribution, which enables smooth shifts between different conditions. The authors also extended optimal transport to Multi-Marginal Optimal Transport (MMOT) for multiple property conditions. They validated their method on a 2D toy model and conditional molecular generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The theory of integrating property conditions and time in flow matching is highly innovative, and the authors developed MMOT to perform optimal transport within this space.\", \"weaknesses\": \"The experimental evidence is insufficient.\", \"questions\": \"Major:\\n\\n1.\tCould the authors explain or give an intuition about the regression in MMOT (Eq. 3.4)?\\n\\n2.\tCould the authors show the extrapolation ability of their methods in a more realistic application of EFM, e.g. 
style transfer of images?\", \"minor\": \"1.\tAt the end of Line 311, \u201cfocus on the\u201d is misspelled as \u201cfocus ton he\u201d.\\n\\n2.\t\u201cConvHull\u201d should be explained.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder from authors\", \"comment\": \"We hope this message reaches you well. We are writing to remind you that the deadline for submitting comments on our ICLR 2025 rebuttal is approaching.\\n\\nIn our previous reply, we explained the differences between [Albergo, et al. 2023] and our work, which was a concern of yours. We have also revised our paper and added an experiment on MNIST; please see the global comment.\\n\\nYour feedback is very valuable to us, so if you have time, we would be grateful if you could provide your comments and reconsider your rating.\"}", "{\"summary\": \"To achieve extrapolation beyond observed conditions, the authors proposed the Extended Flow Matching (EFM) framework, which is developed upon conditional generative modeling. Specifically, the authors introduced a novel algorithm called MMOT-EFM, derived from Multi-Marginal Optimal Transport (MMOT). In the experiments, the authors showed improved MAE over the compared FM-based methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The Extended Flow Matching sounds novel and the authors show the newly introduced conditional components in Fig.1, which is quite intuitive.\\n\\n2. I like the well-structured theoretical discussion from FM to EFM; this can help domain experts grasp the main contribution and difference between the existing OT-CFM and the proposed MMOT-EFM\", \"weaknesses\": \"1. I feel concerned about the experimental design. For instance, the authors introduce a rather unusual setting (Appendix 1300-1306). 
Though it aligns well with the synthetic point cloud experiments, it is quite different from the common practice [1].\\n\\n[1] Ketata M A, Gao N, Sommer J, et al. Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space[J]. arXiv preprint arXiv:2406.10513, 2024.\\n\\n2. I think critical experiments against highly related OT-CFM methods are missing in this version. \\n\\nAlexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian\\nFatras, Guy Wolf, and Yoshua Bengio. Improving and generalizing flow-based generative models\\nwith minibatch optimal transport. arXiv preprint 2302.00482, 2023b.\", \"questions\": \"1. Could you please justify the ZINC-250k experimental design?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for clarifying the methods and providing the experiment results on MNIST.\\n\\nI still think that the practical applications demonstrated in the paper are limited. The authors also mention that their method is only applicable to low-dimensional conditioning vectors, such as the molecule generation showcased in the study. However, I believe that there should be other potential applications beyond this. I will maintain the score.\"}" ] }
0PxLpVURTl
MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Masked Image Modeling Representations
[ "Benedikt Alkin", "Lukas Miklautz", "Sepp Hochreiter", "Johannes Brandstetter" ]
We introduce MIM (Masked Image Modeling)-Refiner, a contrastive learning boost for pre-trained MIM models. MIM-Refiner is motivated by the insight that strong representations within MIM models generally reside in intermediate layers. Accordingly, MIM-Refiner leverages multiple instance discrimination (ID) heads that are connected to different intermediate layers. In each head, a nearest neighbor ID objective constructs clusters that capture semantic information which improves performance on downstream tasks, including off-the-shelf and fine-tuning settings. The refinement process is short and simple - yet highly effective. Within a few epochs, we refine the features of MIM models from subpar to state-of-the-art, off-the-shelf features. Refining a ViT-H, pre-trained with data2vec 2.0 on ImageNet-1K, sets a new state-of-the-art in linear probing (84.7\%) and low-shot classification among models that are pre-trained on ImageNet-1K. MIM-Refiner efficiently combines the advantages of MIM and ID objectives, enabling scaling ID objectives to billion parameter models using relatively little compute. MIM-Refiner compares favorably against previous state-of-the-art SSL models on various benchmarks such as low-shot classification, long-tailed classification and semantic segmentation.
[ "self-supervised learning", "masked image modeling", "instance discrimination", "computer vision", "contrastive learning" ]
Accept (Poster)
https://openreview.net/pdf?id=0PxLpVURTl
https://openreview.net/forum?id=0PxLpVURTl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uD8jLXOY1Z", "nb4W5Qy47I", "hiyCoIvps9", "hLftjI1mTa", "Y5sxSBLlpQ", "QLBisF4bfd", "Q5cMo8sAQF", "OZUkCxG9Ns", "M13rnnitAQ", "IYZSLzuqsj", "9ByCfGLf5L", "8e1gvrCaDd", "72jri0pWLM", "5js0nzGIqN", "4HVAm6vZgD" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731632185674, 1731632146050, 1730523774637, 1731632913101, 1731632618483, 1730727427706, 1731633078544, 1730605944612, 1734442871731, 1731631769758, 1730601084258, 1737523544006, 1731975194298, 1731631385755, 1731974994552 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Reviewer_JDMo" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Reviewer_Auz1" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Reviewer_BbVs" ], [ "ICLR.cc/2025/Conference/Submission2950/Area_Chair_Xp3v" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Reviewer_HaAb" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ], [ "ICLR.cc/2025/Conference/Submission2950/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**Clarification of training diagram in Figure 4**\\n\\nWe aim to depict the training pipeline of MIM-Refiner in Figure 4 by showing the differences between ID, MIM and MIM-Refiner with the goal that the \\\"copy\\\" lines should indicate that MIM-Refiner starts from a pre-trained MIM model where multiple ID heads are 
attached at later blocks instead of only a single ID head at the last block.\\n\\nAdditionally, due to reviewer Auz1's feedback, we added more details to the introductory paragraph of Section 4, which hopefully also clarifies the overall training methodology if it was not clear from Figure 4.\\n\\nWe regret that this was not found to illustrate our method clearly. We hope the additional information clarifies our experimental setup and otherwise would be keen to hear more details about what information is missing or was not clearly presented in the figure.\"}", "{\"comment\": \"Thank you for your review and helpful comments to help us improve the paper. We address your points individually.\\n\\n**End-to-end training of ID and MIM**\\n\\nMIM and ID objectives are quite conflicting where, roughly speaking, MIM considers every pixel/patch to be equally important, as the whole image needs to be reconstructed, while ID only cares about distinguishing positives and negatives in a batch of samples, which implicitly weights pixels/patches by their information content. This conflict also results in vastly different hyperparameter choices that are required for optimal performance. For example, MIM models use few image augmentations (only cropping and resizing but no color augmentations) but high masking ratios (75\\\\%). In contrast, ID models use sophisticated image augmentation pipelines with different augmentation strengths per view, color augmentations and multi-crop augmentation but only small masking ratios (e.g., 25\\\\% for iBOT/DINOv2). These findings were highlighted by two related works [1, 2] (as cited in Section 5.2), which independently came to the conclusion that sequential training can effectively alleviate this conflict.\\n\\nAlso in terms of scalability, MIM models scale extremely well to large model sizes, whereas ID models require much more compute and data. As larger models typically perform better, it is desirable to develop scalable approaches. 
Attempts to develop end-to-end combinations of MIM and ID introduce large compute overheads due to, e.g., target networks or lower masking ratios, which heavily limits scalability. This is evident by the fact that these methods have not been trained on model scales beyond ViT-L, where the largest model of most methods is a ViT-B. Prominent examples are [3, 4, 5]. In contrast, our sequential approach can effortlessly scale up to a 2B parameter model, on a relatively small compute budget, by leveraging the compute efficiency of MIM models.\\n\\nWe also show that end-to-end MIM and ID combinations do not fully leverage the potential of the ID objective in Appendix B.15, where refining a CMAE-B model with our proposed methodology also improves performance despite the fact that an ID objective was already used in combination with the MIM objective in pre-training.\\n\\nAdditionally, MIM-Refiner models have strong off-the-shelf performances, so while MIM-Refiner requires a multi-stage pre-training, it makes downstream training much easier where simple models like a $k$-NN classifier or a linear probe reach somewhat comparable results to fully fine-tuning a model. For example, D2V2-Refined-H achieves 84.7\\\\% with a simple linear probe on ImageNet-1K classification that can be easily trained on a single GPU. Fine-tuning the same model boosts this performance by 2.1 \\\\% while requiring much more compute, necessitating multi-GPU or even multi-node training setups.\\n\\nTo summarize, the fundamental differences of MIM and ID require trade-offs in an end-to-end setting and we therefore opt for a sequential approach. While this increases the complexity of the training pipeline, multi-stage pre-training pipelines are somewhat common, and, since pre-training has to be done only once before a model can be fine-tuned (or simply evaluated) on a broad range of downstream tasks, multi-stage pre-training pipelines do not drastically decrease practicality. 
For example, in language modeling (e.g. [6]), it is very common to \\\"refine\\\" models after pre-training to integrate alignment with human preferences, reasoning, long context understanding, tool usage or safety guards. These multi-stage pre-training pipelines in language models are common because their benefit outweighs the added complexity. We would argue that this also holds true for MIM-Refiner, where the benefits of broader adaptability to many more use cases outweighs the additional pre-training pipeline complexity.\\n\\n\\n[1] Lehner 2023, \\\"Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget\\\" https://arxiv.org/abs/2304.10520\\n\\n[2] Jiang 2023, \\\"Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label-Efficient Representations\\\" https://openreview.net/forum?id=jwdqNwyREyh\\n\\n[3] Huang 2022, \\\"Contrastive Masked Autoencoders are Stronger Vision Learners\\\" https://arxiv.org/abs/2207.13532\\n\\n[4] Zhou 2021, \\\"iBOT: Image BERT Pre-Training with Online Tokenizer\\\" https://arxiv.org/abs/2111.07832\\n\\n[5] Assran 2022, \\\"Masked Siamese Networks for Label-Efficient Learning\\\" https://arxiv.org/abs/2204.07141\\n\\n[6] Dubey 2024, \\\"The Llama 3 Herd of Models\\\" https://arxiv.org/abs/2407.21783\"}", "{\"summary\": \"This paper introduces MIM-Refiner, which leverages contrastive learning to boost MIM models. The proposed method is simple and has demonstrated effectiveness in few-shot image classification tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: This paper is well-written and easy to follow.\", \"s2\": \"This paper is not the first to point out that the encoder in MIM methods partially performs image encoding and representation learning. A similar conclusion is also discussed in [A], highlighting that MIM methods using a single ViT structure tend to face this issue. 
The reviewer previously conducted experiments on MAE-B, showing that introducing an additional decoder can effectively alleviate this problem. This paper demonstrates that, for methods like MAE that use an asymmetric encoder-decoder architecture, especially in larger models, a small decoder cannot fully decouple encoding and decoding, providing academic insights.\", \"s3\": \"This paper proposes a simple and effective MIM-Refiner method, refining the later blocks of MIM models to enhance MIM representations effectively.\\n\\n[A] Context Autoencoder for Self-Supervised Representation Learning.\", \"weaknesses\": \"W1: Existing work [A] has shown that fine-tuning MIM models can enhance their representation capability (for image classification), but the improvement under full fine-tuning is minimal. Additionally, MAE has demonstrated significant transfer performance on dense prediction tasks [B] (object detection/instance segmentation). Fine-tuning MIM models with contrastive learning methods is unlikely to bring substantial improvement and may even negatively impact performance.\", \"w2\": \"Current vision foundation models, such as DINOv2, exhibit strong patch-level representation learning capabilities and combine MIM and CL. Their learned representations have shown effectiveness in tasks like image classification, pixel classification, and depth estimation. 
Although this paper discusses the relationship between MIM-Refiner and these models, suggesting that MIM-Refiner can build on them for further improvement, I am concerned that MIM-Refiner may degrade pixel-level representation performance for tasks like semantic segmentation or depth estimation (especially when the backbone is fixed).\\n\\n[A] Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label Efficient Representations.\\n\\n[B] Exploring Plain Vision Transformer Backbones for Object Detection.\", \"questions\": \"see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your helpful review, particularly the extensive look into related work and are happy to see that our paper was well received and easy to follow. We address your concerns below.\\n\\n**Conceptual differences to CAE**\\n\\nThe CAE approach is well motivated by their intuition to decouple representation learning from learning the pre-training task where they demonstrate the validity of this intuition via experimental evaluation of their proposed pre-training approach. However, they do not conduct an extensive analysis of pre-trained features or leverage any pre-trained representations, which are two key contributions of our paper. Additionally, we show in Figure 7f of the appendix that CAE also faces degrading feature representation in later blocks, suggesting that their approach alleviates but not fully solves the representation degradation. Contrary, Figure 10b of the appendix shows that refined models achieve peak representation quality in the last block without any feature degradation. We would therefore consider CAE and MIM-Refiner to be orthogonal where CAE presents an improved pre-training method that could also be refined for even better representation. 
However, as the largest CAE model is a ViT-L/16 and our focus is on large-scale models, we focus on MIM methods that published even larger models.\\n\\n**Full-finetuning and dense prediction**\\n\\nOur work builds on the insights of sequential MIM -> ID pre-training methods [1, 2] (as discussed in the related work section) to refine MIM models. These related works do not show improvements or even degradation of full fine-tuning and dense prediction tasks. However, our proposed improvements over previous approaches greatly improve representation quality and, consequently, we do not find our models to suffer from these issues. We show this in Table 5, where our models are on-par or slightly better in full fine-tuning settings with large amounts of data. While we agree that refinement with an ID objective is unlikely to bring major performance gains in full fine-tuning with large amounts of labels, we want to stress that this is the strong suit of MIM models where MIM models show state-of-the-art results. MIM-Refiner therefore heavily improves MIM models in a plethora of benchmarks while preserving (or even slightly improving) upon their state-of-the-art full fine-tuning performances. A visual depiction thereof is provided in Figure 2 left, where MIM-Refiner effectively unifies the advantages of MIM and ID.\\n\\nAdditionally, it has been demonstrated that MAE is exceptionally good at COCO object detection and instance segmentation, where training a pre-trained MAE further via weakly-supervised training on a web-scale dataset of 3 billion images even degraded performance vs a plain ImageNet-1K pre-trained MAE by 0.7 AP [3]. 
If not even 3B additional images can boost object detection performance, we do not expect a significant performance gain from our extremely short refinement process that does not add additional data.\\n\\nNevertheless, we agree that it is important to investigate whether or not the refinement degrades performance on COCO object detection and instance segmentation. We therefore conduct experiments in the setting suggested by reviewer HaAb where we train MAE and MAE-Refined with a Mask R-CNN head using the ViTDet framework on COCO. The results suggest that the refinement process preserves the representation quality also for object detection and instance segmentation downstream tasks. We show results below and added them to the paper (Appendix B.10), together with the above discussion.\\n\\n\\n\\n| Model | AP$^\\\\text{box}$ | AP$^\\\\text{mask}$ |\\n|---|---|---|\\n| MAE | **53.5** | **47.8** |\\n| MAE-Refined | **53.5** | 47.7 |\\n\\n\\n[1] Lehner 2023, \\\"Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget\\\" https://arxiv.org/abs/2304.10520\\n\\n[2] Jiang 2023, \\\"Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label-Efficient Representations\\\" https://openreview.net/forum?id=jwdqNwyREyh\\n\\n[3] Singh 2023, \\\"The effectiveness of MAE pre-pretraining for billion-scale pretraining\\\" https://arxiv.org/abs/2303.13496\"}", "{\"comment\": \"Thank you for your valuable review and suggestions to further strengthen the practicality of our approach. We are happy that the storyline of our paper was well understood and appreciated. 
We respond to your questions below.\\n\\n**Overhead of queue**\\n\\nWe use a queue size of 65K where, notably, the queue operates within the bottleneck dimension (256 for all model sizes) of the contrastive head.\\nThe topk NN is found by calculating the cosine similarity for a given sample with all 65K queue entries, and then randomly selecting one of the top k entries in the similarity matrix. Additionally, no gradients flow through the NN-swap so it does not add overhead to the backward pass. These considerations, together with the fact that we train large models lead to a minor overhead from the NN queue. We compare runtimes without a queue, with a top1 NN-swap and with a top20 NN-swap below, which we also included into the paper (Appendix B.17).\\n\\n| Queue size | topk | L/16 | H/14 | 2B/14 |\\n|---|---|---|---|---|\\n| 0 | - | 12.8s | 30.1s | 76.7s |\\n| 65K | 1 | 12.9s | 30.2s | 76.9s |\\n| 65K | 20 | 13.0s | 30.5s | 77.5s |\\n\\n\\n**Evaluation on dense prediction tasks**\\n\\nDense prediction tasks are an important area of computer vision where MIM models have demonstrated exceptional performance. We chose ADE20K semantic segmentation as benchmark for dense downstream tasks as it has established protocols for evaluation via a linear probe (Table 4) and full fine-tuning with feature pyramids and a segmentation head (Table 5) where MIM-Refiner shows strong improvements in linear probing and slight improvements in full fine-tuning. Additionally, it has been demonstrated that MAE is exceptionally good at COCO object detection and instance segmentation, where training a pre-trained MAE further via weakly-supervised training on a web-scale dataset of 3 billion images even degraded performance vs a plain ImageNet-1K pre-trained MAE by 0.7 AP [1]. 
If not even 3B additional images can boost object detection performance, we do not expect a significant performance gain from our extremely short refinement process that does not introduce additional data.\\n\\nNevertheless, we agree that it is important to investigate whether or not the refinement degrades performance on COCO object detection and instance segmentation. We therefore conduct experiments in the suggested setting where we train MAE and MAE-Refined with a Mask R-CNN head using the ViTDet framework on COCO. The results suggest that the refinement process preserves the representation quality also for object detection and instance segmentation downstream tasks. We show results below and added them to the paper (Appendix B.10), together with the above discussion.\\n\\n\\n\\n| Model | AP$^\\\\text{box}$ | AP$^\\\\text{mask}$ |\\n|---|---|---|\\n| MAE | **53.5** | **47.8** |\\n| MAE-Refined | **53.5** | 47.7 |\\n\\n\\n[1] Singh 2023, \\"The effectiveness of MAE pre-pretraining for billion-scale pretraining\\" https://arxiv.org/abs/2303.13496\\n\\n\\n**Representation degradation in other modalities**\\n\\nWe find it an interesting avenue to explore and will follow up shortly, as we do not have a setup for the other D2V2 modalities ready-to-go and focused on getting timely object detection results to facilitate discussion.\\n\\nHowever, we did have a pipeline for VideoMAE [2] and AudioMAE [3] ready-to-go, which shows a similar trend, interestingly enough, also for smaller models. We include these preliminary results in Appendix E.\\n\\n[2] Tong 2022, \\"VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training\\" https://arxiv.org/abs/2203.12602\\n\\n[3] Huang 2022, \\"Masked Autoencoders that Listen\\" https://arxiv.org/abs/2207.06405\"}", "{\"summary\": \"The paper focuses on bridging the gap between large MIM pre-trained models and SOTA methods. 
The paper first discovers that MIM models have different types of blocks: those that mainly improve the pre-training objective and others that are responsible for abstraction. Then, the paper proposes a method MIM-Refiner, which adds Instance Discriminator heads on pre-trained MIM models for refinement. The ID heads exploit the intermediate representations to consistently improve the performance of MIM pretrained models. While the performance gains on large dataset full-finetuning are small, the proposed methods show remarkable gains on few-shot settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper first points out the influence of the lightweight decoder on the feature learning of the encoder in MIM methods.\\n2. The analyzing part is well-written.\", \"weaknesses\": \"1. The description of the method and experimental setup needs to be clarified. (a) Which blocks need to be fine-tuned during refinement, or do all blocks need to be fine-tuned? (b) How many epochs are needed to refine different models? (c) What is the structure of the ID head? Answers to all these questions should be contained in the manuscript.\\n2. Unfair comparison. The paper misses an important baseline - train the original model with 0 heads with the same epochs to demonstrate the importance of refinement (instead of just training more epochs).\\n3. Some typos. L267-269, see Table 1 instead of Figure 3b.\", \"questions\": \"Please refer to the weakness. I believe a clear description of the method and experimental setup is one of the most important things when writing a paper (weakness 1).\", \"additional_question\": \"what does the \\u201crelative\\u201d in Figure 3(d) mean? 
Is the value calculated as (the performance of the i+1-th layer - the performance of the i-th layer)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Pixel-level representation vs DINOv2**\\n\\nDINOv2 is an excellent vision foundation model that has demonstrated outstanding performances across various tasks. However, at its core, DINOv2 is a scaled-up version of iBOT which uses a curated dataset of 142M high-quality images. Notably, the dataset is created by retrieving images that are similar to those used for, e.g., semantic segmentation (ADE20K, Cityscapes, Pascal VOC) from an extremely large web-crawled image collection. This dataset curation procedure, together with the patch-level objective of iBOT, allows DINOv2 to learn strong pixel-level representations. However, it cannot be overstated that an essential contributor to this performance is the private, highly curated web-scale dataset, which makes a fair comparison against DINOv2 impossible, as a model pre-trained on 100x more data will obviously outperform a model trained on ImageNet-1K in most cases. The closest comparison to DINOv2 is to compare against the models published by iBOT as their underlying training methodology is identical. We compare against iBOT on all benchmarks where we outperform it by large margins in all settings, including ADE20K segmentation with a frozen encoder (Table 4) and full fine-tuning on ADE20K segmentation (Table 5) using an UperNet semantic segmentation head.\\n\\nDue to these insights, together with the demonstrated scalability of MIM for billion-scale datasets and model sizes [4], we hypothesize that MIM-Refiner can scale way beyond ImageNet-1K, potentially even outperforming DINOv2 due to highly efficient pre-training. To put it into perspective, [4] trained MAE models up to 6.5B parameters whereas the largest DINOv2 model is 1.1B parameters. 
MIM-Refiner could leverage the efficient pre-training of MAE to train a 6.5B parameter model followed by a short refinement process on, e.g., the curated DINOv2 dataset which would require a fraction of the compute it would take to train a 6.5B DINOv2 model.\\n\\n\\n[4] Singh 2023, \\\"The effectiveness of MAE pre-pretraining for billion-scale pretraining\\\" https://arxiv.org/abs/2303.13496\\n\\n**Pixel-level representation preservation under frozen backbone**\\n\\nWe evaluate the performance of MIM models and their refined versions on the pixel-level task of ADE20K semantic segmentation with a frozen encoder in Table 4. The refined models show significant gains in mIoU over their unrefined counterparts across all model sizes. Table 15 in the appendix confirms these results on many more models.\"}", "{\"summary\": \"This paper presents a contrastive learning boosting method called MIM-Refiner to refine the features of pre-trained MIM models. MIM-Refiner leverages multiple instance discrimination heads (ID) which are connected to different immediate layers. Each ID head contains a contrastive loss that captures semantic information to improve the quality of learned representations. By training a few epochs, the features of MIM-Refiner surpass the current MIM models on multiple experiments: on ViT-H with data2vec 2.0 on ImageNet-1K, the accuracy of the proposed method reaches state-of-the-art performance on linear probing and low-shot classification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes a detailed analysis of the blocks of MIM models in which different blocks extract features with a specific focus and the most efficient features learned by MIM are from the middle blocks.\\n\\n2. A contrastive learning-based method called MIM-Refiner is proposed to refine the representation of current MIM models by attaching the middle layers with ID objective. \\n\\n3. 
Experimental results show the effectiveness and generalization ability of MIM-Refiner on downstream tasks.\", \"weaknesses\": \"1. As the discussion of end-to-end training, the proposed method MIM-Refiner seems to be a two-step training method, with first step training MIM models and fine-tuning the updated models by incorporating ID heads to middle layers. Practically, this might increase the complexity of the training paradigm and deployment. Is it possible to improve the proposed method with end-to-end training on MIM and ID? If not, what are the potential bottlenecks to circumvent this goal?\\n\\n2. There is no overview diagram that shows the detailed architecture of MIM-Refiner or how the training diagram goes. The diagram in Figure 4 provides partial information but does not clearly illustrate these points.\", \"questions\": \"Please refer to weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"After discussion, this submission received 4 positive scores . The major concerns about the experimental details, \\u00a0preliminary analysis\\u00a0 and method clarity were comprehensively solved. After reading the paper, the review comments and the rebuttal, the AC thinks the remaining issue is to include all the revised content to the camera-ready version and correct typographical errors.\", \"additional_comments_on_reviewer_discussion\": \"After discussion, all the reviewers gave the positive scores and did not raise further concerns or issues.\"}", "{\"comment\": \"Thank you for your profound review and suggestions that helped us to improve our paper. We are glad that the analysis part was found to be a nice read. We corrected the typos and address your points below.\\n\\n**Clarifications on experimental setup**\\n\\nWe agree that the most important hyperparameters should be presented in the main text. 
Therefore, we restructured the beginning of Section 4 to include an overview of the experimental setting. Your comment also made us aware that we never referred to the full set of implementation details, as included in Appendix C, including how many blocks are finetuned (C.5 Table 22), how many epochs are needed for different models (C.5 Table 22) or the ID head structure (C.4).\\nWe fixed this oversight by appropriately referring to it in Section 4.\\n\\n\\n**Comparison against prolonged MIM training**\\n\\nWe provide a comparison similar to the proposed one already within our paper, where CrossMAE-L is pre-trained for only 800 epochs (due to computational resource restrictions of the authors), while MAE-L is pre-trained for 1600 epochs. One can clearly see in, e.g., Table 10 or Table 14 of the appendix that CrossMAE-Refined outperforms MAE, where CrossMAE-Refined is pre-trained for 800 epochs followed by 30 epochs of refinement while MAE is pre-trained for 1600 epochs.\\n\\nWhile we see that this is not a perfect comparison due to the differences in the decoder between MAE and CrossMAE, we believe that it provides sufficient evidence to underline the effectiveness of our method. Unfortunately, a direct comparison by prolonging the training of a pre-trained MIM model has multiple issues as we will outline below.\\n\\nFirst, MIM models are commonly pre-trained using a cosine annealing learning rate schedule, which means that for a proper comparison, one would need to train the whole model from scratch with a higher epoch count, which is extremely expensive (note that we simply downloaded pre-trained MIM checkpoints and never needed to train one from scratch). During the later epochs of the cosine annealing schedule, the model is updated with tiny learning rates. 
If one were to use the pre-trained model and start training it for some more epochs, using a standard warmup into cosine annealing schedule and the same pre-training objective, it would essentially destroy the updates of the last few epochs due to the increasing learning rate, before conducting the \\"same\\" updates again once the learning rate decreases again.\\n\\nSecond, often, only the ViT encoder is published (without the decoder), so prolonging the training would require some mechanism to initialize the decoder, as starting the training with a randomly initialized decoder would most likely degrade the encoder features until the decoder has learned to produce sensible reconstructions.\\n\\nThird, MIM models are often trained for many epochs. For example, MAE models are trained for 1600 epochs. It is highly unlikely that prolonging this training by, e.g., another 50 epochs, will significantly change the performance of the model as performance gains tend to saturate.\\n\\nLastly, the number of epochs for MIM pre-training is often optimized as a hyperparameter. For example, in data2vec 2.0 the number of epochs is decreased with model size, which suggests that larger models converge faster. Therefore, adding additional epochs could also degrade performance due to overfitting. While compute costs could be another explanation why the number of epochs was decreased with model size, we find the overfitting explanation more realistic as D2V2 is extremely compute efficient and D2V2-B or D2V2-L could have easily been trained longer as their compute budget also included training of a D2V2-H model. Therefore, prolonged training of smaller models would not have significantly impacted the total compute budget.\\n\\n\\n**Clarification on \\"relative\\" in Figure 3d**\\n\\nYour conclusion is correct: we calculate the relative improvement by subtracting the performance of block i from the performance of block i + 1. 
Additionally, we divide by the maximum relative improvement to put both performance metrics (k-NN accuracy and reconstruction loss) into the same value range with 1 as upper bound. We included the methodology to calculate the relative improvement into the caption of Figure 3 and expanded the description in Appendix D.2.\"}", "{\"summary\": \"---\\n\\n## **Summary**\\n\\nThe paper identifies the representation degradation issue in Masked Image Modeling (MIM)-pretrained large foundation models. To address this, the authors propose a simple yet effective method to prevent degradation and further improve the representation quality of MIM methods by adding auxiliary contrastive losses to the last layers of Vision Transformers (ViTs) on top of the MIM objective. The paper provides improved performances with large margins over current state-of-the-art (SOTA) methods through extensive experiments and rigorous analysis, demonstrating the success of the proposed approach.\\n\\n---\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"---\", \"## **Strengths**\", \"The paper is well-written, with clear observations, a well-developed motivation, a straightforward idea, a clearly-stated method, extensive experiments, and comprehensive analysis.\", \"It effectively identifies the representational degradation phenomenon in large visual foundation models pre-trained with MIM self-supervised learning (SSL), providing evidence through multiple experiments.\", \"The proposed method offers a simple and effective solution to prevent this issue and improve the representation quality of MIM SSL.\", \"Rigorous experiments and analysis are conducted to show the success of the proposed method, with large improvements over current SOTA.\", \"---\"], \"weaknesses\": \"---\\n### **Limitations**\\n\\n1. 
To prevent representation quality degradation in the last layers of ViTs, the authors experiment with contrastive loss, which requires constructing a queue/pool for positive and negative samples. I noticed the proposed method uses a top 20-NN approach to retrieve positive samples in the queue, which could contribute significantly to the increased training time per step. What's the queue size used? how much does it contribute to the increased training time per step?\\n\\n2. Since the paper emphasizes preserving the richness of representations, evaluation on dense prediction tasks such as object detection and instance segmentation (OD/IS) would be valuable, in addition to the provided segmentation probing on ADE20K.\\n\\n - It would be meaningful to compare the performance of MIM-refiner-pretrained ViT-L on COCO object detection against MAE-pretrained ViT-L following the ViTDet framework [1].\\n\\n - [1] Li, Y., Mao, H., Girshick, R., & He, K. (2022, October). *Exploring plain vision transformer backbones for object detection.* In European Conference on Computer Vision (pp. 280-296). Cham: Springer Nature Switzerland.\\n\\n---\\n\\n### **Recommendation**\\n\\nConsidering the strengths and weaknesses discussed above, my recommendation for this paper is **ACCEPT**. This is a strong paper with a clear contribution.\", \"questions\": \"---\\nSince D2V2 is used as a baseline, does the representation degradation issue also appear in the audio and language domains?\\n\\n---\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We updated the paper to include an analysis for feature degradation for models of different modalities that are pre-trained by reconstructing a part of the masked input in Appendix E. 
Our analysis covers 3 different domains (Video, Language, Audio) where feature degradation is present in all modalities.\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank all reviewers for their positive feedback, constructive comments and suggestions.\", \"We are pleased to see that the reviewers highlighted the clarity of our paper and appreciated our analysis as well as our extensive experiments. Several reviewers recognized the clear motivation of our approach from the thorough analysis.\", \"We updated the paper to incorporate the feedback of the reviewers. To summarize, we made the following changes:\", \"Added more details to the experimental setup in the main paper (first paragraph in Section 4) such as training duration and included a reference to the extensive implementation details and hyperparameters in the appendix (reviewer Auz1).\", \"Included methodology to calculate the relative improvement in Figure 3d also in the caption instead of only in the appendix (reviewer Auz1).\", \"Added results for COCO object detection and instance segmentation in Appendix B.10 (reviewer HaAb and JDMo)\", \"Added preliminary analysis of feature degradation of masked pre-training in other modalities in Appendix E (reviewer HaAb).\", \"corrected various typos\", \"Additionally, we respond to each review individually, addressing the raised questions and concerns.\"]}", "{\"comment\": \"**Representation degradation in other modalities**\\n\\nWe updated the paper to also include results for models pre-trained with masked language modeling. As D2V2 only trains a single model size (ViT-B) for its language modeling experiments, we instead opt for RoBERTa models which train models up to a ViT-L (and are also pre-trained with a masked language modeling objective). Our analysis now covers 3 different domains (Video, Language, Audio) where feature degradation is present in all modalities.\"}" ] }
0PcJAHbSmc
DrivingRecon: Large 4D Gaussian Reconstruction Model For Autonomous Driving
[ "Hao LU", "Tianshuo Xu", "Wenzhao Zheng", "Yunpeng Zhang", "Wei Zhan", "Dalong Du", "Masayoshi Tomizuka", "Kurt Keutzer", "Ying-Cong Chen" ]
Photorealistic 4D reconstruction of street scenes is essential for developing real-world simulators in autonomous driving. However, most existing methods perform this task offline and rely on time-consuming iterative processes, limiting their practical applications. To this end, we introduce the Large 4D Gaussian Reconstruction Model (DrivingRecon), a generalizable driving scene reconstruction model, which directly predicts 4D Gaussian from surround-view videos. To better integrate the surround-view images, the Prune and Dilate Block (PD-Block) is proposed to eliminate overlapping Gaussian points between adjacent views and remove redundant background points. To enhance cross-temporal information, dynamic and static decoupling is tailored to learn geometry and motion features better. Experimental results demonstrate that DrivingRecon significantly improves scene reconstruction quality and novel view synthesis compared to existing methods. Furthermore, we explore applications of DrivingRecon in model pre-training, vehicle adaptation, and scene editing. Our code will be made publicly available.
[ "4D Gaussian Reconstruction; Autonomous Driving" ]
Reject
https://openreview.net/pdf?id=0PcJAHbSmc
https://openreview.net/forum?id=0PcJAHbSmc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zo3HY9Hemd", "zFrI2thmnx", "zCLpl0JvBT", "y3aFjP1w17", "w1V4yrsX4T", "fhJlOVUraz", "eT8xK9FLQQ", "b0Ykr8itp4", "YwTTloPV2X", "VznDbZG9RC", "RlAepkxq8c", "NuQJu6e2ae", "K23KfTyBY0", "K0HJSnDVYa", "JNdf6bYPFB", "GfN1FZivsq", "CZy2eYcN5V", "A3hK639PQb", "9HBZcT91mC", "7aV5B7xJJm", "6Jf4Zeo4Jk", "1ioCujMJI3" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1733073156840, 1732325702913, 1737523376092, 1730466526235, 1732250136753, 1733191418601, 1732555101222, 1732520583816, 1732556718463, 1732550527516, 1732554975527, 1732496318919, 1732248082913, 1730613361321, 1729568961352, 1732496366768, 1732247198068, 1733024579232, 1732496392459, 1732246239412, 1732248198824, 1734506085438 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission81/Reviewer_conn" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission81/Reviewer_3kKi" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Reviewer_EbAv" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Reviewer_3kKi" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Reviewer_conn" ], [ "ICLR.cc/2025/Conference/Submission81/Reviewer_EbAv" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Authors" ], [ "ICLR.cc/2025/Conference/Submission81/Area_Chair_MxhF" ] ], "structured_content_str": [ "{\"comment\": \"Thanks to the authors for the response. I move the discussion here as I share similar concerns with Reviewer EbAv.\\n1. The quality in the video is way too low to call it a reconstruction method.\\n2. While the papers you mentioned can be further simplified, I still feel DrivingRecon is more complicated and entangled than necessary, with its requirement of more specialized modules, pretrained models, and priors.\\n3. The speed comparison is not solid. In my experience, StreetGaussian, with some hyperparameters tuned and trained for 5-10 minutes, could end up with better results than DrivingRecon. And DrivingRecon's claimed 1.21s time cost is not the full log reconstruction cost (the authors don't mention this, but I expect it is per timestamp). So this seems an unfair (or at least not solid) comparison. In particular, the comparison setup for 3DGS (\\\"At each time step, a 3DGS model needs to be trained.\\\") is problematic, as these methods can be trained on all frames simultaneously to achieve better and more temporally consistent results. Lastly, DrivingRecon requires significant GPU resources for training (24 NVIDIA A100 80GB GPUs as stated in the paper). With this training cost, we could reconstruct hundreds of logs at higher quality.\"}", "{\"title\": \"General Response\", \"comment\": \"We fixed some minor typos and uploaded the paper.
We would be more than happy to discuss your concerns in greater detail.\\n\\nWe have made minor revisions to the paper, all of which are summarized as follows:\\n\\n**Efficiency comparison:** Our approach is compared with both traditional optimization methods and the latest generalizable feedforward networks in terms of latency and memory usage. Results and discussion are in Section C of the supplementary material.\\n\\n**Detailed experimental process:** We have fixed some typos. We give more details of the pre-trained model and the image editing experiment. We will proofread the paper thoroughly to enhance its writing, presentation, and layout.\\n\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a learning-based reconstruction method in a feed-forward manner in driving scenarios. It could predict 4D Gaussian primitives from multi-view temporal input. It is a very early work that explores learning-based generalizable reconstruction and rendering for autonomous driving. This paper also introduces a couple of downstream applications such as model pre-training and vehicle adaptation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"It is a very early work that explores learning-based generalizable reconstruction methods for autonomous driving, demonstrating this paradigm could work in real-world driving scenarios.\", \"This paper is comprehensive since it not only develops the methods but also incorporates potential applications such as perception and driving tasks.\", \"The self-supervised pretraining task is insightful.\"], \"weaknesses\": [\"This paper does not demonstrate the model's generalization to different viewpoints. The authors claim the ability of vehicle adaptation. However, only the camera intrinsics are changed.
Could the predicted 4D Gaussians produce good rendering quality in viewpoints beyond the driving trajectories (different extrinsics)? A recent work [1] explores this direction.\", \"The resolution is relatively low. The produced rendering quality cannot meet the requirements of practical use, such as camera simulation.\", \"It would be better to show the inference latency.\", \"The authors do not provide video demonstrations of the rendering results. It is hard to have an intuitive understanding of the actual performance.\", \"[1] Freevs: generative view synthesis on free driving trajectory.\"], \"questions\": \"How does the scene editing (Fig. 6) work? This procedure could be described in more detail.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer 3kKi\", \"comment\": \"Thank you for your thoughtful comments and for taking the time to review our work. Your feedback is genuinely appreciated, and I hope to clarify and enhance our responses to your concerns.\\n\\n### **(W1 and W4) Generalization to Different Viewpoints**\\n\\nTo demonstrate the effectiveness of our method in generating new perspectives, we have provided a series of reconstruction videos available at the following link: [Drive-Recon Videos](https://anonymize58426.github.io/Drive-Recon/). In these videos, you can observe the translation of lane lines between 3 and 12 seconds and the rotation of viewpoints between 17 and 26 seconds. This effectively illustrates that the scenes reconstructed by our algorithm maintain geometric consistency across varying perspectives.\\n\\nAdditionally, I appreciate your reference to the Freevs method; it is indeed an intriguing approach.
I believe we can adapt some of its useful components to enhance our model's ability to generate new viewpoints.\\n\\n### **(W2) Limitations of Resolution**\\n\\nCurrently, it seems that feedforward networks yield slightly lower rendering quality in driving scenarios compared to traditional optimization methods. However, I am hopeful that advancements in network architectures, training strategies, and generative techniques will soon enable feedforward networks to exceed the capabilities of optimization methods. Our paper serves as a preliminary study in the driving domain and provides a codebase that may accelerate progress in this area.\\n\\nMoreover, existing state-of-the-art generalizable Gaussian splatting algorithms tend to operate at lower resolutions and realism, particularly in indoor scenes [1, 2, 3]. Therefore, we believe our work is still at the forefront of the field of generalizable Gaussian splatting.\", \"references\": \"1. Mvsplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images. *ECCV* 2025.\\n2. Large Spatial Model: End-to-End Unposed Images to Semantic 3D. NeurIPS 2024.\\n3. FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scene Reconstruction. NeurIPS 2024.\\n\\n### **(W3) Lack of Efficiency Analysis**\\n\\nThank you for highlighting the need for an efficiency analysis. We will include a thorough evaluation of our method's efficiency in the revised paper. Based on the evaluation protocol outlined in Table 1, we compared the speed and PSNR of our method against traditional optimization methods:\\n\\n| Method | PSNR | SSIM | LPIPS | Time Cost |\\n| --- | --- | --- | --- | --- |\\n| 3D-GS | 24.91 | 0.71 | 0.16 | 5.5h |\\n| DrivingGaussian | 26.12 | 0.74 | 0.13 | 6.2h |\\n| **Ours** | **23.70** | **0.68** | **0.17** | **1.21s** |\\n\\nAs indicated in the table, our algorithm performs comparably to traditional optimization methods in terms of PSNR while significantly reducing time costs. 
This efficiency makes our method more suitable for data-driven applications, such as driving simulators. This experiment has been added to Part C of the supplementary material.\\n\\n### **(Q) Details of Scene Editing**\\n\\nTo edit scenes, we can utilize existing 3D generation models to create Gaussian representations of arbitrary objects, such as LGM [1]. We then modify the x, y, and z positions of the Gaussian representation (in world coordinates) to place the objects appropriately within the driving scenario. Importantly, the Gaussians predicted by our method are represented within the world coordinate system.\", \"reference\": \"1. LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation.\\n\\nThank you for your attention to our responses. If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know. We would be more than happy to discuss your concerns in greater detail.\"}", "{\"title\": \"Further Response\", \"comment\": \"Dear Reviewers,\\n\\nThank you for the response. I readily admit that optimization-based methods are still of better quality than generalizable methods. I am not saying that my model cannot be simplified; I am saying that I can make better use of some new techniques to further optimize the algorithm. Five minutes of training StreetGaussian still takes more than 200 times longer than generalizable methods. And the iterative optimization method does not require less memory than our model's inference does. By the way, using the original StreetGaussian parameters required 2 hours of training on the 3090 or A100 and 1.5 hours on the 4090. The original 3DGS cannot be trained across time series because it has no separation of static and moving objects. And I am using DrivingGaussian instead of StreetGaussian.
I insist that generalizable Gaussian splatting is promising, and you are too harsh on the first 4D driving generalizable reconstruction paper.\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Further Response\", \"comment\": \"Dear Reviewer EbAv,\\n\\nThank you for carefully reading our response and providing further comments. I hope our further discussion will allow you to change your opinion. \\n\\nMost importantly, multiple sources of supervision and complex modules are not grounds to reject a paper: (1) Optimization-based driving reconstruction approaches also require multiple sources of supervision (point cloud initialization and segmentation of dynamic and static objects) [1,2,3,4]. At inference time, our models do not need depth and segmentation labels. Besides, our method can indeed be trained without requiring semantic segmentation or 3D boxes. As shown in Table 4(a), we can learn geometry through perspective consistency and point cloud depth information without the need for Dynamic and Static Rendering (DS-R). In the W1 response we also explained that our approach can work without depth supervision. (2) Learnable driving tasks, such as perception, also have multiple modules, including a temporal fusion model, a multi-view fusion model, an image encoder, an image decoder, and a detection head [5, 6, 7].
CVPR2024\\n\\n### **Comparison with Existing Feed-Forward Methods**\\n\\nThanks to the reviewer's reminder, we should indeed evaluate the efficiency of other SOTA feed-forward generalizable models.\\n\\n| | PSNR | SSIM | LPIPS | Time Cost | Memory |\\n| --- | --- | --- | --- | --- | --- |\\n| LGM | 19.52 | 0.52 | 0.32 | 1.82s | 21.42G |\\n| pixelSplat | 20.54 | 0.58 | 0.28 | 2.44s | 19.65G |\\n| MVSplat | 21.33 | 0.64 | 0.24 | 1.64s | 15.47G |\\n| L4GM | 20.01 | 0.54 | 0.30 | 1.98s | 23.74G |\\n| Ours | 23.70 | 0.68 | 0.17 | 1.21s | 11.08G |\\n\\nAs shown in the table, our method is clearly the best in inference speed and memory usage. For autonomous driving scenes, the efficiency of our method is due to: (1) Multi-view fusion better integrates multiple views with small overlap in **the form of a range view**. (2) Temporal fusion operates on highly compressed implicit features, which greatly reduces memory and inference delay. (3) The image encoder and decoder are **shared** across different views and can be run in parallel.\", \"disadvantages_of_other_methods\": \"(1) For the input of multiple views, MVSplat needs to calculate the cost volume between any two image pairs, which greatly increases the memory consumption and inference delay. (2) LGM and L4GM concatenate all the images into a multi-view attention fusion network. The uncompressed images sent to the view fusion network consume memory and increase inference delay. In addition, the small overlap of different views in the driving scene does not require such redundant attention mechanisms. (3) pixelSplat uses a polar-coordinate attention fusion mechanism to integrate different views. The small overlap of different views in the driving scene does not require such redundant attention mechanisms.
Specifically, a large number of queries are empty.\\n\\n### **Limitations of the Original 3DGS**\\nIn our efficiency comparison experiments, the original 3DGS also relies heavily on point cloud supervision to initialize the Gaussian points. At each time step $t$, a 3DGS model needs to be trained. This is why it takes a significant amount of time, approximately 5.5 hours, to reconstruct a 200-frame video using 3DGS. However, the reconstruction tends to overfit, and its ability to synthesize novel views is poor, as shown in the table below. \\n\\n| | Reconstruction | | | Novel View | | | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS | Time Cost |\\n| 3D-GS | 24.91 | 0.71 | 0.16 | 18.81 | 0.55 | 0.31 | 5.5h |\\n| DrivingGaussian | 26.12 | 0.74 | 0.13 | 22.34 | 0.74 | 0.19 | 6.2h |\\n| Ours | 23.70 | 0.68 | 0.17 | 20.63 | 0.61 | 0.21 | 1.21s |\\n\\nAs shown in the table, 3DGS deteriorates significantly in novel view synthesis. At each time step, there are only a few observed viewpoints in driving scenes (such as 6 views in nuScenes and 5 in Waymo) to supervise 3DGS. Besides, the overlap between these surround views is minimal, making it challenging for 3DGS to accurately learn the geometry. To address this, DrivingGaussian and our method use segmentation to identify the static background and perform cross-temporal supervision.\\n\\n If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know.\"}", "{\"comment\": \"I appreciate the authors' effort in pioneering the exploration of feed-forward autonomous 4DGS reconstruction, and the authors\\u2019 rebuttal does resolve some of my concerns (W1&W2).\\n\\nHowever, I think the performance presented in this paper may not be sufficient to fully support its motivation.
This paper requires multiple inputs (depth supervision/semantic segmentation) and a heavy network architecture (image encoder & decoder/TCA), but shows a significant drop in PSNR compared to optimization-based reconstruction methods (e.g. DrivingGaussian), and does not provide efficiency comparisons with other feed-forward reconstruction methods (as the proposed network is more complex than other methods, I guess it will need more time or memory). The aforementioned problems limit the practical use of this paper. \\n\\n\\nMore comments\", \"w3\": \"The original 3DGS is designed for static scenes without depth or segmentation supervision, which makes the comparison to 3DGS less meaningful.\", \"w4_4\": \"The authors claim that a 200-frame video might take around 6h for optimization-based reconstruction, while I believe the time overhead should not be this significant.\"}", "{\"comment\": \"Dear Reviewer 3kKi,\\n\\nThank you for carefully reading our response and providing further comments. Here is our further response:\\n\\n### **Comparison with Existing Feed-Forward Methods**\\n\\nWe apologize for our misunderstanding. Our method's entry in the table is inference latency, while those of the optimization methods are optimization time. The inference latency of the **rendering part** is only about 0.04 seconds. In addition to optimization-based methods, we further evaluated the efficiency of other SOTA feed-forward generalizable models.\\n\\n| | PSNR | SSIM | LPIPS | Time Cost | Memory |\\n| --- | --- | --- | --- | --- | --- |\\n| LGM | 19.52 | 0.52 | 0.32 | 1.82s | 21.42G |\\n| pixelSplat | 20.54 | 0.58 | 0.28 | 2.44s | 19.65G |\\n| MVSplat | 21.33 | 0.64 | 0.24 | 1.64s | 15.47G |\\n| L4GM | 20.01 | 0.54 | 0.30 | 1.98s | 23.74G |\\n| Ours | 23.70 | 0.68 | 0.17 | 1.21s | 11.08G |\\n\\nAs shown in the table, our method is clearly the best in inference speed and memory usage.
For autonomous driving scenes, the efficiency of our method is due to: (1) Multi-view fusion better integrates multiple views with small overlap in the form of a range view. (2) Temporal fusion operates on highly compressed implicit features, which greatly reduces memory and inference delay. (3) The image encoder and decoder are shared across different views and can be run in parallel.\", \"disadvantages_of_other_methods\": \"(1) For the input of multiple views, MVSplat needs to calculate the cost volume between any two images, which greatly increases the memory consumption and inference delay. (2) LGM and L4GM concatenate all the images into a multi-view attention fusion network. The uncompressed images sent to the view fusion network consume memory and increase inference delay. In addition, the small overlap of different views in the driving scene does not require such redundant attention mechanisms. (3) pixelSplat uses a polar-coordinate attention fusion mechanism to integrate different views. The small overlap of different views in the driving scene does not require such redundant attention mechanisms. Specifically, a large number of queries are empty.\\n\\n### **Some Issues with Novel Views**\\n\\nFor driving scenes, synthesizing new views is still a very challenging problem. Even well-known optimization methods, such as DrivingGaussian, do not render new perspectives well. The Freevs method you mentioned is a good novel view synthesis solution, even though it is only the latest paper to appear on arXiv. In addition, Freevs is only a generative method for synthesizing new view images, rather than a 3D/4D reconstruction method for predicting Gaussians. These are two different routes. DriveRecon makes it very easy to incorporate these new methods into a synthesis solution. I believe our approach has great potential.\\n\\nThank you for your attention to our responses.
If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know. We would be more than happy to discuss your concerns in greater detail.\", \"title\": \"Further Response\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for providing the additional experiment. However, I asked for the inference latency (rendering) in my original review. The authors seem to provide the optimization time for 3DGS and DrivingGaussian.\\n\\nThe generalization to new viewpoints seems to have some issues since there are clear unreasonable deformations or artifacts when laterally shifting the viewpoints.\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer conn,\\n\\nThank you for carefully reading our response and providing further comments. This is our further reply:\\n\\n### **(P1) Complicated Design**\\n\\nWe acknowledge that our approach is complex, but for autonomous driving, this is hard to avoid. Learnable driving tasks, such as perception, also have multiple modules, including a temporal fusion model, a multi-view fusion model, an image encoder, an image decoder, and a detection head [1, 2]. These methods, which were considered too complex, have already been deployed to run on actual vehicles. So this is not a reason to reject our paper. Moreover, the two algorithms you mentioned are also complex [3,4].\\n\\n[1] Exploring object-centric temporal modeling for efficient multi-view 3d object detection. CVPR2023\\n\\n[2] Bevformer: Learning bird\\u2019s-eye-view representation from multi-camera images via spatiotemporal transformers. ECCV2023\\n\\n[3] G3r: Gradient guided generalizable reconstruction. ECCV 2025.\\n\\n[4] SCube: Instant Large-Scale Scene Reconstruction using VoxSplats. arXiv preprint arXiv:2410.20030 (2024).\\n\\n### **(P2) Other Generalizable Approaches**\\n\\nThank you for mentioning the two new related papers.
The rendering quality of the two papers is not significantly better than ours. If you zoom in on Fig. 4 of both Paper [3] and Paper [4], you will find that their rendering quality is not good enough. Papers [3] and [4] are difficult to compare against because of their lack of open-source code and their complex designs. Paper [4] appeared after we submitted our paper. In addition, we discuss them further:\\n\\nPaper [3] also lacks realism. In the presentation of the paper, the images (Fig. 4) are very small; if you zoom in, these pictures are very blurry. And the authors reported inference speeds of 31s and 123s for a single scene. Their model cannot even predict the 3D Gaussian representation directly; they need multiple iterations of the network to predict it. Besides, their method is not open source and is very difficult to reproduce given its complex algorithm.\\n\\nMethod [4] appeared on arXiv after our ICLR submission. The visualization in Figure 4 of their paper is not much better than ours. Because we are working with multiple views at multiple times, the resolution of the images we render is lower due to GPU memory. Their approach takes only three images as input and splits the model into two stages, which allows them to render more high-resolution images. Their method learns geometric features entirely by relying on dense point cloud supervision, which uses a set of dense point-cloud methods. I believe that our approach has more potential with small improvements: (1) Splitting different time steps' images onto different GPUs and then merging temporal features across GPUs, which is already widely used in video generation. In this way we can render more high-resolution images. (2) Our approach further makes good use of pre-trained video models. However, their approach relies on voxel-based models, which severely limits its upper bound.
(3) The visual encoder of our method can be used as a pre-training model, while their model cannot be used for this at all.\\n\\nI believe that in the near future, advances in network architectures, training strategies, and generation techniques will enable feedforward networks to surpass optimization methods. For example: (1) Use pre-trained video encoders and decoders to improve the rendering quality. (2) Assign different time steps' images to different GPUs and then perform feature fusion across GPUs, which is a common operation in the field of video generation. (3) Use more driving datasets to train stronger models.\\n\\nIn addition, the most advanced generalizable Gaussian splatting algorithms available generally operate at relatively low resolution and realism in indoor scenes [5,6,7]. Therefore, this paper is still at an advanced level in the field of generalizable Gaussian splatting.\\n\\n[5] Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. *ECCV* 2025.\\n\\n[6] Large Spatial Model: End-to-end Unposed Images to Semantic 3D. NeurIPS 2024\\n\\n[7] FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scene Reconstruction. NeurIPS 2024\\n\\n### **UniAD Experiments**\\n\\nI would like to express my sincere apologies for the inaccuracy of my reply. I copied the original data of this table from Table 10 of ViDAR [8] rather than UniAD. The experimental parameters were carried out completely in accordance with https://github.com/OpenDriveLab/ViDAR. Specifically, we just replaced the image encoder and everything else was exactly the same.\\n\\n[8] Visual Point Cloud Forecasting enables Scalable Autonomous Driving. CVPR 2024\\n\\nThank you for your attention to our responses. If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know.
We would be more than happy to discuss your concerns in greater detail.\"}", "{\"title\": \"Discussion request\", \"comment\": \"Dear Reviewer EbAv,\\n\\nI would like to express my sincere gratitude for your constructive comments. As the ICLR discussion phase is almost over, I wanted to kindly ask if there are any remaining questions or clarifications needed regarding our responses. Please feel free to reach out at any time.\\n\\nWe would be truly grateful if you could consider raising our rating, as your support is crucial for the potential acceptance of our work.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"title\": \"Author Response to Reviewer EbAv (Part 1)\", \"comment\": \"---\\n\\n### **(W1) The Need for Depth Map Ground Truth**\\n\\nWe appreciate the reviewer's insights on the necessity of depth supervision. While depth supervision is not strictly required in our framework, we acknowledge that it can enhance the speed of model convergence and lead to improved final results. \\n\\nTo further clarify our findings, we conducted a comparative experiment to evaluate the impact of depth supervision under the experimental conditions detailed in Table 1. The results are presented below:\\n\\n| | PSNR | SSIM | LPIPS |\\n| --- | --- | --- | --- |\\n| Ours w/o depth | 22.23 | 0.63 | 0.24 |\\n| Ours | 23.70 | 0.68 | 0.17 |\\n\\nThe table indicates that our algorithm can still learn geometric information through temporal consistency, even in the absence of depth constraints. Furthermore, in driving scenarios, depth information from LiDAR is typically readily available, as discussed in other works (e.g., paper [1]).\\n\\n### **(W2) The Limitations of Semantic Segmentation**\\n\\nCurrently, we only segment cars, two-wheelers, and pedestrians as dynamic objects. However, here are some simple ideas for segmenting all dynamic objects.
For instance, calculating optical flow from videos could help identify points in dynamic regions, which could then serve as prompts for a segmentation algorithm like SAM. This would enable us to segment a broader range of dynamic objects.\\n\\nWe believe that this limitation should not overshadow the value of our paper, as many advanced and well-regarded driving reconstruction algorithms share this same challenge [1, 2, 3, 4]. Addressing this issue is a minor aspect of our framework and does not reflect the core innovations presented in our work.\\n\\n[1] Drivinggaussian: Composite gaussian splatting for surrounding dynamic autonomous driving scenes. CVPR2024.\\n\\n[2] Holistic urban 3d scene understanding via gaussian splatting. *CVPR2024*\\n\\n[3] OmniRe: Omni Urban Scene Reconstruction. *arXiv preprint arXiv:2408.16760* (2024).\\n\\n[4] Street gaussians for modeling dynamic urban scenes. ECCV2025\\n\\n### **(W3) Lack of Efficiency Analysis**\\n\\nWe appreciate the reviewer pointing out the need for an efficiency analysis. We will include an analysis of the efficiency of our method in the paper. According to the evaluation protocol in Table 1, we compared the speed and PSNR of our method against traditional optimization methods:\\n\\n| | PSNR | SSIM | LPIPS | Time Cost |\\n| --- | --- | --- | --- | --- |\\n| 3D-GS | 24.91 | 0.71 | 0.16 | 5.5h |\\n| DrivingGaussian | 26.12 | 0.74 | 0.13 | 6.2h |\\n| Ours | 23.70 | 0.68 | 0.17 | 1.21s |\\n\\nAs shown in the table, our algorithm performs comparably to traditional optimization methods in terms of PSNR, while significantly reducing time cost. This experiment has been added to Part C of the supplementary material. To demonstrate the effectiveness of our reconstruction, we have provided some reconstruction videos at the following link: https://anonymize58426.github.io/Drive-Recon/.
In the videos, you can see the lane lines being translated between 3 to 12 seconds, and the viewpoint being rotated between 17 to 26 seconds. This demonstrates that the scenes reconstructed by our algorithm maintain geometric consistency.\"}", "{\"summary\": \"Unlike previous methods (e.g., 3DGS/NeRF) that require thousands of iterations to reconstruct a scene, this work aims to predict a 3D scene representation using a neural network.\\n\\n The authors make several design choices to make this pipeline work (PD-block, regularization, 3D encoding, etc.). \\n\\nExperiments conducted on Waymo demonstrate better performance compared to other generalizable approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. A generalizable and scalable approach that allows training of large models to learn priors from extensive data, generalizing to novel scenes.\\n2. **Almost** no 3D bounding box labels required for dynamic scenes, enhancing scalability.\\n3. Detailed explanations and extensive experiments on cross-data evaluation, downstream applications (data augmentation, pretrained model for perception, scene editing).\", \"weaknesses\": \"1. Overcomplicated design:\\n While I appreciate the effort in developing a generalizable model with dynamic-static decomposition, the model seems quite complex, requiring:\\n * Multiple modules (image encoder-decoder, temporal cross-attention, Gaussian adapter, PD block, etc.)\\n * Numerous regularization terms\\n * Several pretrained models (DepthNet, DeepLab, SAM)\\n\\n This complexity may hinder downstream applications when used as a pretrained model. For instance, how fast is the model? Is it efficient enough for use in autonomy systems?\\n\\n2. The realism is still lower compared to optimization-based approaches (e.g., 3DGS), and can only operate on low resolution (256x512) with a limited number of images.\\n\\n3. 
(Minor point) The writing seems somewhat rushed, lacking thorough proofreading. Some potential issues:\\n * L155, \\\"corresponding intrinsic parameter E\\\" should be K\\n * L414 \\\"evaluation on NOTA-DS6\\\" should be Diversity-54\", \"questions\": \"**Regarding efficiency and comparison with 3DGS**\\n\\nWhat is the computational cost to train the model (how many hours on 24 GPUs)? \\nHow long does it take to reconstruct a 3D scene representation using your approach during inference? How does the efficiency compare to 3DGS, e.g., StreetGaussian on 256x512?\\n\\nHow does the realism compare to 3DGS (e.g., StreetGaussian at 256 \\u00d7 512)? It's okay if it's worse; I'm just curious. \\n\\n\\n\\n**On 3D labels**\\nWhat is the performance without using 3D bounding boxes at all? I note that you use 3D bounding boxes as prompts for SAM. A label-free approach would make this work more impactful.\\n\\n**On downstream applications**\\nHow is UniAD implemented in Waymo? Would it be possible to conduct your experiments on nuScenes to follow the setting/implementation of UniAD?\\n\\n**Miscellaneous**:\\n* How many frames are in the input during training?\\n* In Table 4b, what does \\\"Training Num\\\" refer to? Do you mean number of scenes? The PSNR seems quite high compared to Table 3.\\n\\nSome questions may require additional experiments; please disregard if they're not feasible. However, I'm particularly interested in the efficiency and comparison with 3DGS.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a feed-forward 4D reconstruction method that generates 4D scenes from surround-view video inputs in a single feed-forward pass.\\nThe method involves 3D Position Encoding, Temporal Cross Attention, Gaussian Adapter, and Prune and Dilate Block. All these modules consist of the feed-forward 4D reconstruction pipeline. 
\\nThe PD-Block learns to prune redundant Gaussian points from different views and background regions and dilate Gaussian points for complex objects, enhancing the quality of reconstruction.\\nThis paper also presents rendering strategies for both static and dynamic components, enabling efficient supervision of rendered images across temporal sequences.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.This paper first explores a feed-forward 4D reconstruction method for surround-view driving scenes, which promotes the development of feed-forward technology in the field of 4D reconstruction.\\n\\n2.The proposed PD-Block learns to prune and dilate the Gaussian points and allows for Gaussian points that are not strictly pixel-aligned, which is innovative.\", \"weaknesses\": \"1.The training process requires depth map ground truth, whereas comparison methods like Pixelsplat and MVSpalt can be trained without it. This reliance on depth ground truth during training restricts its practical applicability.\\n\\n2.The dynamic objects are decomposed through segmentation and have only few categories (vehicles and people). This approach only separates dynamic and static pixels based on semantics, limiting its ability to achieve comprehensive 4D reconstruction of all dynamic objects.\\n\\n3.Compared to scene-optimized methods, feed-forward reconstruction provides the advantage of generalization, eliminating the need of test-time optimization for each new scene (though it may lead to some decrease in accuracy compared to the scene-optimized method). In the papers of comparing methods MVSplat and PixelSplat, both of them present running time and memory consumption, demonstrating the efficiency of their feed-forward approaches. However, in this paper, while the authors claim their method is feed-forward, they do not provide an analysis of its running time and memory usage. 
I recommend including this efficiency analysis and comparing it with other methods to strengthen the evaluation. \\n\\nBesides, if the authors believe that efficiency is not a concern of this paper, then comparisons with other offline scene-optimized methods (e.g., DrivingGaussian) should be included.\\n\\n4.If the possible application is to develop real-world simulators in autonomous driving (mentioned in the abstract of the paper), then there is no high requirement for the efficiency of reconstruction, and the existing offline scene-optimized 4D reconstruction method is also acceptable. However, feed-forward does not seem to have an advantage in terms of reconstruction accuracy.\", \"questions\": \"1.What does \\u201cDA-Block\\u201d in line 202 refer to? It is not mentioned in the context.\\n\\n2.Please refer to the questions and suggestions in the Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion request\", \"comment\": \"Dear Reviewer 3kKi,\\n\\nI would like to express my sincere gratitude to you for your constructive comments. As the ICLR discussion phase is almost over, I wanted to kindly ask if there are any remaining questions or clarifications needed regarding our responses. Please feel free to reach out at any time.\\n\\nWe would be truly grateful if you could consider raising our rating, as your support is crucial for the potential acceptance of our work.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"title\": \"Author Response to Reviewer conn (Part 2)\", \"comment\": \"### **(Q1) Comparison to Optimization-Based Approaches**\\n\\nWe appreciate your inquiry regarding efficiency. We require only 20 hours to complete 50,000 iterations on 24 A100 GPUs. 
According to the protocol outlined in Table 1, we have compared the time cost and PSNR of our method with that of traditional optimization methods.\\n\\n| | PSNR | SSIM | LPIPS | Time Cost |\\n| --- | --- | --- | --- | --- |\\n| 3D-GS | 24.91 | 0.71 | 0.16 | 5.5h |\\n| DrivingGaussian | 26.12 | 0.74 | 0.13 | 6.2h |\\n| Ours | 23.70 | 0.68 | 0.17 | 1.21s |\\n\\nAs illustrated in the table above, our algorithm is nearly comparable to traditional optimization methods, while inference takes only 1.21 seconds, highlighting our method's substantial speed advantage. The experiment was updated to part C of supplementary materials.\\n\\n---\\n\\n### **(Q2) The Need for 3D Boxes**\\n\\nI am grateful for your question about the use of 3D boxes. Our method can indeed be trained without requiring semantic segmentation or 3D boxes. As shown in Table 4(a), we can learn geometry through perspective consistency and point cloud depth information without the need for Dynamic and Static Rendering (DS-R).\\n\\nAdditionally, we can use simple techniques for segmentation without utilizing 3D boxes. For example, we can compute optical flow from videos to identify points in dynamic regions, which can be used as prompts for the SAM to segment dynamic objects, enabling us to segment any type of dynamic object without 3D boxes.\\n\\n---\\n\\n### **(Q3) Downstream Applications**\\n\\nWe train DriveRecon on the nuScenes training set without relying on 3D boxes or segmentation ($\\\\lambda_{sr} =0$ and $\\\\lambda_{seg}=0$). The pretrained model is then employed as an image encoder for UniAD. Then, UniAD is fine-tuned entirely using the original UniAD's training parameters. The results for UniAD in Table 6 are sourced from the original paper, and our pretrained model significantly enhances performance. I will provide clearer details about the experimental setup in the paper.\\n\\n---\\n\\n### **(Q4) Miscellaneous**\\n\\nThank you for your attention to our experimental setup. 
We utilized three frames of images as input for all experiments. In Table 4(b), \\\"Training Num\\\" refers to the mean number of scenes. Tables 1, 2, 3, and 4(a) use only 64 scenes (NOTA-DS64). Specifically, we trained with 64 scenes (NOTA-DS64) and tested with 54 new scenes (Diversity-54), achieving satisfactory results. This indicates that our algorithm demonstrates good generalization performance even when trained on a small dataset. We will reiterate these details in the appropriate sections of the paper.\\n\\n\\nThank you for your attention to our responses. If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know. We would be more than happy to discuss your concerns in greater detail.\"}", "{\"title\": \"Request for further feedback\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely apologize for troubling you once again, and we deeply appreciate the time and effort you have put into reviewing our submission. We responded further and look forward to discussing with you. As the ICLR discussion period has been extended, we still have approximately some days to continue the discussion. Please let us know if there are any additional points or concerns that you would like us to address before the discussion phase concludes. Your valuable feedback is highly appreciated and will help us further improve our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Discussion request\", \"comment\": \"Dear Reviewer conn,\\n\\nI would like to express my sincere gratitude to you for your constructive comments. As the ICLR discussion phase is almost over, I wanted to kindly ask if there are any remaining questions or clarifications needed regarding our responses. 
Please feel free to reach out at any time.\\n\\nWe would be truly grateful if you could consider raising our rating, as your support is crucial for the potential acceptance of our work.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"title\": \"Author Response to Reviewer conn (Part 1)\", \"comment\": \"### **(W1) Overcomplicated Design**\\n\\nThank you for your thoughtful comments regarding the complexity of our model. We believe that the design elements are essential for effective driving reconstruction for several reasons:\\n\\n1. **Necessity of Different Modules**: The 4D feedforward model requires the efficient integration of multiple perspectives and varying time intervals. To achieve this, we utilize temporal cross-attention to merge temporal information and implement the PD-Block for effective multi-view image fusion. Additionally, the GaussianAdapter facilitates the transfer of image features into a Gaussian representation.\\n2. **Importance of Regularization Terms**: At any given moment \\\\( t \\\\), the rendering supervision of the scene is limited by the sparse number of views. Moreover, the presence of multiple dynamic objects complicates monitoring at time \\\\( t \\\\). To address this, we decouple dynamic objects from static ones, allowing for better perspective utilization during the rendering process. Although this decoupling introduces numerous regularization terms, we regard them as necessary for achieving the desired outcomes.\\n3. **Significance of Pretrained Models**: Besides leveraging the SAM and DeepLab models, we acknowledge that more efficient methods exist for decoupling dynamic and static objects, which we discuss in detail in response to Q3. Importantly, DepthNet is not a pretrained network; it constitutes a part of our model that can be trained.\\n\\nOur findings in Table 4(a) demonstrate that these components are both valid and necessary. 
For downstream tasks, we utilize only the image encoder as the pretrained model, omitting the others to prevent any additional burden on training and inference. The model\\u2019s reconstruction inference speed is elaborated upon in (Q1).\\n\\nFurthermore, even widely recognized optimization-based reconstruction algorithms exhibit significant complexity in driving scenes [1, 2, 3, 4, 5]. They often integrate multiple annotations, regularizers, and neural network architectures.\\n\\n[1] Drivinggaussian: Composite gaussian splatting for surrounding dynamic autonomous driving scenes. *CVPR 2024*\\n\\n[2] Hugs: Holistic urban 3d scene understanding via gaussian splatting. *CVPR2024*\\n\\n[3] OmniRe: Omni Urban Scene Reconstruction. *arXiv preprint arXiv:2408.16760* (2024).\\n\\n[4] Street gaussians for modeling dynamic urban scenes. ECCV2025\\n\\n[5] Unisim: A neural closed-loop sensor simulator. CVPR2023.\\n\\n### **(W2) The limitation of realism**\\n\\nCurrently, it appears that feedforward networks exhibit slightly lower rendering quality in driving scenarios compared to optimization methods. We have provided some reconstruction videos where you can observe our method at the following link: https://anonymize58426.github.io/Drive-Recon/. In the videos, you can see the lane lines being translated between 3 to 12 seconds, and the viewpoint being rotated between 17 to 26 seconds. This demonstrates that the scenes reconstructed by our algorithm maintain geometric consistency. I am confident that, in the near future, advancements in network architectures, training strategies, and generative techniques will enable feedforward networks to surpass optimization methods. This paper is a preliminary study in the field of driving and provides a code base, which will accelerate the development of the field.\\n\\nFurthermore, existing state-of-the-art generalizable Gaussian splatting algorithms often operate at relatively low resolutions and realism in indoor scenes [1, 2, 3]. 
Therefore, our paper is still at an advanced level in the field of generalizable Gaussian splatting.\\n\\n[1] Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. *ECCV* 2025.\\n\\n[2] Large Spatial Model: End-to-end Unposed Images to Semantic 3D. NeurIPS 2024 \\n\\n[3] FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes Reconstruction. NeurIPS 2024\"}", "{\"title\": \"Author Response to Reviewer EbAv (Part 2)\", \"comment\": \"### **(W4) The Necessity of a 4D Feedforward Network**\\n\\nWhile we acknowledge that the rendering quality of the feedforward network in driving scenes may currently be slightly lower than that of optimization methods, we are confident that feedforward networks will surpass optimization methods in the near future, especially with advancements in generative techniques. We would like to highlight several key points regarding the necessity of feedforward networks:\\n\\n1. **General Driving Pre-Training Model**: The 4D feedforward network serves as a general pre-training model for driving tasks. By learning 4D reconstruction, the network extracts geometric information from the entire scene while simultaneously predicting motion information for moving objects. This pre-training approach captures more geometric and temporal features than existing methods, as verified in Table 6 of our paper. Our model can serve as a pre-training tool to enhance the performance of driving algorithms in multiple tasks such as perception and planning without relying on labels like segmentation or 3D bounding boxes.\\n2. **Possibility of a 4D World Model**: Current driving world models often exist in video form without explicit geometric representation, leading to limitations in perspective and temporal consistency. Upgrading our reconstruction model to support predictable 4D generation could address these issues. 
A 4D world model could provide explicit geometry to constrain end-to-end planning, representing a significant advancement for the field.\\n3. **Scaling Capabilities**: The 4D feedforward network has the potential for scalability. It can be trained using an abundance of driving videos available on the Internet. This extensive training could greatly enhance reconstruction capabilities, which can then be leveraged for tasks such as perception and planning, either through pre-training or as part of the world model.\\n4. **Importance of Generalization and Efficiency**: Building a 4D scene for a new city using optimization methods is very time-consuming, particularly with long videos capturing thousands of scenarios. For example, a 200-frame video (10 seconds) might take around 6 hours for an optimization-based reconstruction method, which is not feasible in practice.\\n\\n### **(Q) Minor Typos**\\n\\nWe would like to thank the reviewer for catching the typo regarding \\u201cDA-Block\\u201d in line 202; it should indeed be \\u201cPD-Block.\\u201d We will proofread the paper thoroughly to enhance its writing, presentation, and layout.\\n\\nThank you for your attention to our responses. If you have any further questions or if anything remains unclear, please don\\u2019t hesitate to let us know. We would be more than happy to discuss your concerns in greater detail.\"}", "{\"metareview\": \"This paper proposes a driving scene reconstruction model, named DRIVINGRECON, which directly predicts 4D Gaussians from surround-view videos. While the idea is somewhat novel and offers a unique approach to scene reconstruction, the performance falls short compared to existing optimized methods. Additionally, the overall pipeline is excessively complex, which may hinder practical implementation and maintenance. 
Based on these strengths and weaknesses, the decision is not to recommend acceptance at this time.\", \"additional_comments_on_reviewer_discussion\": \"This paper was reviewed by three experts in the field and finally received marginal scores of 5, 5, and 6.\", \"two_major_concerns_of_the_reviewers_are\": \"1.\\tthe proposed method cannot achieve comparable performance relative to optimized methods,\\n2.\\tthe pipeline is excessively complex.\\nThe authors failed to address these two concerns during the discussion period. \\nI fully agree with these two concerns and, therefore, make the decision to reject the paper.\"}" ] }
0PC9goPpuz
Compatibility-aware Single-cell Continual Annotation
[ "Yuyao Zhai", "Liang Chen", "Minghua Deng" ]
As massive well-labeled single-cell RNA-seq (scRNA-seq) data are available sequentially, automatic cell type annotation systems would require the model to continuously update to expand their internal cell type library. However, the model could suffer from the catastrophic forgetting phenomenon, in which the performance of the model on the old tasks degrades significantly after it learns a new task. To enable the smooth upgrading of the system, the model must possess the ability to maintain performance on old tasks (stability) and adapt itself to learn new tasks (plasticity). We call such an updating process continual compatible learning. To adapt to this task, we propose a simple yet effective method termed scROD based on sample replay and objective decomposition. Specifically, we first maintain a memory buffer to save some cells from the previous tasks and replay them to learn together with the next incoming tasks. Then we decompose two different training objectives in continual compatible learning, i.e., distinguishing new cell types from old ones and distinguishing between different new ones, to avoid forgetting the model to varying degrees. Lastly, we assign distinct weights for two objectives to obtain a better trade-off between model stability and plasticity than the coupled approach. Comprehensive experiments on various benchmarks show that scROD can outperform existing scRNA-seq annotation methods and learn many cell types continually over a long period.
[ "Continual Compatible learning; Single-Cell RNA-seq data" ]
https://openreview.net/pdf?id=0PC9goPpuz
https://openreview.net/forum?id=0PC9goPpuz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "VG2vFCa7uC", "TDNIbMKfUT", "G9qh4NPxPT", "FuATXRMzsd" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730752482665, 1730338873690, 1730775132081, 1731432431222 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5465/Reviewer_JBpo" ], [ "ICLR.cc/2025/Conference/Submission5465/Reviewer_E1SE" ], [ "ICLR.cc/2025/Conference/Submission5465/Reviewer_RNHg" ], [ "ICLR.cc/2025/Conference/Submission5465/Authors" ] ], "structured_content_str": [ "{\"summary\": \"Authors propose an online learning approach scROD to annotate single cell RNA seq data. scROD uses a memory buffer and a new loss function to preserve classification performance on the past data while being able to annotate newly acquired data at the same time. Authors compare with several baselines and existing methods demonstrating the improvements in continual single-cell annotation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly written and the experiments are clearly presented to back the claims made by the authors.\\n2. The problem is interesting since many single-cell RNA datasets have come up in the past few years. Online learning or transfer learning which ensures that the same network/fine-tuned networks can successfully annotate new data would be a strong contribution to the scientific community.\", \"weaknesses\": \"1. The idea to utilize a memory buffer is a widely-used idea in the reinforcement learning literature (Deep Q-Networks) and even continual learning literature (Gradient Episodic Memory). Therefore the core contribution is not technically novel. There is some novelty to decompose the loss function and consider the impact of different loss functions on the catastrophic forgetting issue in this setting but the results are fairly obvious. 
For example, when we are training on new datasets, we should ensure class balancing to ensure no classes are compromised which can be achieved with weighing loss function or sampling per class.\\n2. There are no past methods that specifically target the problem of continual learning but rather consider query and reference datasets, which I believe is a much harder problem with no supervision available on a query dataset. Therefore comparison with these methods is good to have but unfair to evaluate the utility of scROD.\", \"questions\": \"1. How is scROD different from GEM (Gradient Episodic Memory, Lopez-Paz, D., & Ranzato, M. A., 2017)? Can authors repurpose GEM and compare it with scROD? There are several follow ups for GEM for example \\\"Adaptive Memory Replay for Continual Learning\\\" from James et. al 2024 and \\\"MGSER-SAM: Memory-Guided Soft Experience Replay with Sharpness-Aware Optimization for Enhanced Continual Learning\\\" from Li et. al 2024 that could be compared against? Can authors theoretically and experimentally compare scROD with these approaches?\\n2. Since the manuscript tackles annotating scRNA-seq datasets, are there any practical limitations of this approach, can this be directly deployed by medical practitioners? There already exist many methods for annotation which do no assume access to supervision on query datasets, how did the authors consider online setting relevant to scRNA-seq dataset annotation? Is it possible to get small labeled samples on a query dataset? \\n3. Can you make accurate biological inferences from this method? 
Is it possible to identify genes which cause classification to a particular cell-type?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents scROD, a method designed to address the challenge of updating automatic cell type annotation models in single-cell RNA-seq (scRNA-seq) data while preventing catastrophic forgetting, where the model's performance on previously learned tasks deteriorates after learning new tasks. To tackle this, the authors introduce the concept of continual compatible learning, which emphasizes maintaining stability on old tasks while adapting to new ones. The proposed scROD method leverages sample replay by using a memory buffer to retain cells from earlier tasks, allowing the model to learn these alongside new tasks. It also separates two training objectives: distinguishing new cell types from old ones and differentiating between newly introduced cell types. By assigning distinct weights to these objectives, scROD achieves a balance between stability and adaptability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. A thorough investigation of the continual cell type annotation task.\\n2. Provide comprehensive experimental benchmarks for the proposed method and baselines.\", \"weaknesses\": \"1. This manuscript focuses on one task: continual cell-type annotation. My main concern is that the importance of such continual annotation might not be very high. There exists some large-scale atlas and databases covering different species, like Human Cell Atlas, CELLxGENE, Mouse Cell Atlas, Zebrahub, and so on. Those resources cover a large range of tissues and provide cell type annotations. Some of them also curate the annotations with Cell Ontology. In most cases, a simple model pretrained on some atlas, such as CellTypist[1], can handle the annotation of unseen data. 
Can the authors provide concrete examples of scenarios where continual learning would be necessary or advantageous? What are the limitations of current approaches that continual learning specifically addresses?\\n2. Related to bullet 1, currently all the experiments only focus on continual cell type annotation. Have the authors considered evaluating their method on vanilla annotation or zero-/few-shot annotation tasks? How might the proposed method's performance compare to existing methods in these scenarios?\\n3. According to the experimental results, the performance of the proposed method is not always better than the baselines. If so, what factors contribute to these performance differences? How do the computational requirements of scROD compare to the baselines?\\n\\n[1] Dom\\u00ednguez Conde, C., et al. \\\"Cross-tissue immune cell analysis reveals tissue-specific features in humans.\\\"\\u00a0*Science*\\u00a0376.6594 (2022): eabl5197.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces scROD, a method for continual compatible learning in the context of single-cell RNA sequencing (scRNA-seq) data annotation. scROD employs a combination of sample replay and objective decomposition to address the challenge of catastrophic forgetting, where models typically lose performance on old tasks after learning new ones. By maintaining a memory buffer to store samples from previous tasks and replaying them alongside new data, scROD balances the retention of old knowledge with the acquisition of new information. Furthermore, it decomposes the training objectives into new/old cell type distinction and new cell type distinction, assigning different weights to these objectives to achieve a better trade-off between model stability and plasticity. 
This approach allows scROD to continuously learn and annotate new cell types over time without forgetting previously learned ones, demonstrating effectiveness through comprehensive experiments on various benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a novel framework dubbed scROD that combines sample replay and objective decomposition, addressing the critical issue of catastrophic forgetting in continual learning scenarios. scROD effectively balances the model's ability to retain old knowledge (stability) and adapt to new tasks (plasticity), which is crucial for continual learning systems.\\n\\n2. The paper evaluates scROD on a variety of benchmarks, including intra-tissue, inter-tissue, and inter-data scenarios, demonstrating its robustness across different annotation challenges. Besides, scROD outperforms existing state-of-the-art methods in scRNA-seq annotation, showing significant improvements in both old and new task accuracies.\\n\\n3. The article presents its findings with clear and concise figures and tables.\", \"weaknesses\": \"1. The innovation of this paper is quite common in continual learning, where many methods use replay buffer approaches to tackle catastrophic forgetting (e.g., [R1][R2][R3]). This paper does not show significant differences from those methods or specific distinctions for RNA data.\\n\\n2. The novelty of objective decomposition is intuitive and easy to understand, which leverage two parameters \\\\alpha_1 and \\\\alpha_2 to balance the optimization objectives.\\n\\n3. The experimental analysis of two learning objectives is trivial, since derivation of Eq. 7 is intuitive. Besides, it seems like that only using L_cur leads to better performance on previous tasks than L_pre, as shown in Figure 3. Thus, why do not just leveraging L_cur instead of L_pre?\\n\\n4. 
Although ScROD achieves SOTA performance in various settings, Tables 1, 2 and 3 show that ScROD achieves only slightly higher performance than Replay, which is not compelling evidence of the effectiveness of ScROD.\\n\\n5. In Figure 5, the first two experiments were performed on the inter-data benchmark, and the last two on the inter-tissue benchmark. Why not use the same benchmark for all the ablation studies?\\n\\n\\n[R1] Maracani, A., Michieli, U., Toldo, M., & Zanuttigh, P. (2021). Recall: Replay-based continual learning in semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 7026-7035).\\n\\n[R2] Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan, T., Dokania, P. K., Torr, P. H., & Ranzato, M. A. (2019). On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486.\\n\\n[R3] Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., & Tesauro, G. (2018). Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910.\", \"questions\": \"Please see the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0OzDMjPHa3
Efficient Visualization of Implicit Neural Representations via Weight Matrix Analysis
[ "Jennifer Zvonek", "Andrew Gillette" ]
An implicit neural representation (INR) is a neural network that approximates a function over space and possibly time. Memory-intensive visualization tasks, including modern 4D CT scanning methods, represent data natively as INRs. While such INRs are prized for being more memory-efficient than traditional data on a lattice, discretization to a regular grid is still required for many visualization tasks. We present an algorithm to store high-resolution voxel data only for regions with significant detail, reducing memory requirements. To identify these high-detail areas, we use an interpolative decomposition pruning method on the weight matrices of the INR. The information from pruning is used to guide adaptive mesh refinement, allowing automatic mesh generation, tailored to the underlying resolution of the function. From a pre-trained INR with no access to its training data, we produce a variable resolution visualization with significant memory savings.
[ "Implicit neural representation", "pruning", "visualization", "adaptive mesh refinement" ]
Reject
https://openreview.net/pdf?id=0OzDMjPHa3
https://openreview.net/forum?id=0OzDMjPHa3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "k8tCzPmNWj", "NcDN7CD1ZQ", "MptPJo8beD", "IS1KtX6Zuz", "HsHKCippH3", "3tXY9qQEZY", "1oQVbxORVg" ], "note_type": [ "official_review", "official_review", "decision", "official_review", "official_comment", "official_review", "meta_review" ], "note_created": [ 1730228425255, 1731139453564, 1737524100039, 1730480114690, 1732227809954, 1730276781933, 1734442379084 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11053/Reviewer_YabU" ], [ "ICLR.cc/2025/Conference/Submission11053/Reviewer_YBEk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11053/Reviewer_QKu8" ], [ "ICLR.cc/2025/Conference/Submission11053/Authors" ], [ "ICLR.cc/2025/Conference/Submission11053/Reviewer_1F7Y" ], [ "ICLR.cc/2025/Conference/Submission11053/Area_Chair_b8rg" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a novel method for visualizing implicit neural representations (INRs) via an adaptive grid evaluation. The core idea is to prune the neural network using an interpolation decomposition.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"I like the presentation. It is self-contained and properly presents the background. The paper is very easy to follow.\", \"The core idea is simple and easy to implement.\"], \"weaknesses\": \"* The evaluation should follow established procedures in the field. The paper uses datasets that are not usual for evaluation of similar approaches. I recommend the authors to read related papers in detail and use the datasets commonly used in the field. Example datasets include Thingi10K, Stanford, etc.\\n\\n* The technical contribution is thin. The algorithm proposed may be considered incremental since the interpolative decomposition used is not proposed by the paper. In such a case, I would recommend the authors to focus on finding additional applications and to deeply evaluate the approach. 
Such tasks would help to find additional properties of the representation that may be emphasized in future versions, increasing the manuscript value.\\n\\n* There are no comparisons with state-of-the-art.\\n\\n* The related works section is very thin. INRs is a gigantic area. I would advise the authors to start by checking this survey to find the papers they should cite. It is a little bit outdated now, but it is a good starting point. \\n\\n```\\n@inproceedings{xie2022neural,\\n title={Neural fields in visual computing and beyond},\\n author={Xie, Yiheng and Takikawa, Towaki and Saito, Shunsuke and Litany, Or and Yan, Shiqin and Khan, Numair and Tombari, Federico and Tompkin, James and Sitzmann, Vincent and Sridhar, Srinath},\\n booktitle={Computer Graphics Forum},\\n volume={41},\\n number={2},\\n pages={641--676},\\n year={2022},\\n organization={Wiley Online Library}\\n}\\n```\", \"questions\": [\"The concept of mesh is a little bit misleading in the paper in my opinion. In the context of INRs, mesh is used to denote a surface represented by a triangle mesh. I think the correct term the paper should use is grid. That would solve other derived term problems. For example, an adaptive mesh is a concept established in Computer Graphics for decades, meaning a triangle mesh that may be subdivided or simplified as needed.\", \"How is the coarse uniform mesh extracted from the INR in the proposed approach?\", \"As algorithm 1 runs, will there be different versions of the INR matrices? Each pruning operation results in different layer matrices and the version used depends on which part of the domain is being evaluated.\", \"I need more details about how the pruning is applied in the algorithm in practice. $\\\\bar{U} := UT^T$ contains the complexity that was pruned from $W$ and $b$. In other words, the pruned parameters are moved to the next layer. 
However, all layers should be evaluated when the INR is evaluated, thus the computational complexity is still the same in the end. Probably there is an additional step to disregard $\\\\bar{U}$ that I did not find in the text.\", \"The meaning of ID_samples seems confusing. The paper first states that it is the number of samples in the domain to take when computing the interpolation decomposition (Table 1). However, in Algorithm 1 ID_samples seems to be the number of neurons to use for pruning.\", \"I would like to know the wall time to compute the visualization and how it compares with the non-adaptive visualization.\", \"Should use a standard metric to compare reconstruction (Chamfer or Hausdorff distance)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a new algorithm for visualizing implicit neural representations (INRs) through a pruning-based approach. The method determines the high-detail regions in pre-trained INRs and then uses adaptive mesh refinement to split up the domain, thus saving memory. The results show that the proposed algorithm can achieve comparable visualization accuracy while using fewer degrees of freedom than uniform grid discretization or basic AMR. However, the contribution is incremental, the presentation requires improvement, and the experimental section is weak.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The target problem, reducing cost in visualizing INRs, is meaningful.\\n2. The qualitative results show some improvement compared to AMR.\", \"weaknesses\": \"1. The novelty is unclear. The paper combines established techniques without adequately explaining the challenges or novel solutions provided.\\n\\n2. The experiment section is weak. 
\\nThe experimental results are limited; more comparisons with advanced visualization techniques for INRs would strengthen the evaluation.\\n\\n3. The paper's clarity and organization could be significantly improved.\", \"questions\": \"1 Terminology: Key terms such as \\\"domain\\\" and \\\"adaptive mesh\\\" need clearer definitions. Are there specific examples or illustrations that could be added?\\n\\n2 ID pruning\\n\\n2.1 What is the computational cost of ID? \\n\\n2.2 Why is the number of samples set to the width of the INR layers? \\n\\n2.3 How does this hyper-parameter impact the final performance? \\n\\n\\n3 There is no detailed discussion on the computational costs of the algorithm.\\n\\n4 The paper lists multiple hyperparameters but does not explain how they were chosen or their impact on the algorithm\\u2019s performance.\\n\\n5 Including comparisons with state-of-the-art INR visualization methods or adaptive algorithms would deepen the insights and show the algorithm's standing in the broader research landscape.\\n\\n6 Expanding experiments to larger datasets would better illustrate the scalability and robustness of the approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents an efficient method for visualizing Implicit Neural Representations (INRs) by adaptively refining only high-detail regions, identified through pruning of weight matrices. This approach maintains visualization quality with reduced memory use, as it avoids uniform discretization. Tests on CT scan data show it can achieve detailed visualization while significantly lowering computational demands, making it ideal for large-scale, dynamic data\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper proposes a dynamic adaptation method for INR visualization that keeps high resolution in high-detailed regions while reducing resolution in low-detailed regions. This is efficient in memory saving, especially useful in large scale 3D/4D data visualization.\\n2. The combination of AMR and ID for variable resolution is interesting. It saves computational resources by avoiding computation on low-detailed region reconstruction.\\n3. It shows great potential in real-world application by the experiments in the CT dataset.\", \"weaknesses\": \"1. This paper lacks theoretical analysis on how AMR and ID succeed in high-detail INR model visualization. Since AMR and ID have been widely studied and well developed, the combination is not a novel enough approach to this problem. Here are some points that I think are important to analyze:\\n - It is necessary to explain how important information is preserved when ID pruning in INR, especially in high-detailed regions. Since the representational capacity of INR is directly related to the size of the weight matrix, a sufficient analysis on how ID pruning affects the reconstruction accuracy of INR is important.\\n - Pruning can impact the local approximation accuracy of the INR model, so it\\u2019s essential to analyze whether sufficient details can still be retained after pruning at various mesh resolutions. This aspect could be supported by a quantitative analysis on the relationship between pruning rate and error in different levels of mesh resolutions.\\n - AMR relies on local error criteria, but ID pruning may reduce reconstruction accuracy in certain regions, potentially missing some details if not properly controlled. Therefore, it is necessary to analyze the impact of pruning on AMR\\u2019s local error estimation.\\n1. This paper gives a preliminary experiment and 2 CT experiments. The datasets are simple, not enough to support the efficiency of their algorithm. 
I suggest doing experiments on some medical CT datasets, e.g. [LUNA16](https://luna16.grand-challenge.org/Data/).\\n1. The authors show the influence of the hyperparameters on the results in their experiments, but this discussion is not enough. Across the 3 experiments, the hyperparameter $T$ varies from $10^{-4}$ to $10^{-1}$, $\\epsilon$ varies from $10^{-3}$ to $10^{-2}$. The range is too big for users to find a set of useful settings. Are there any guiding rules on how to choose the hyperparameters with respect to the dataset? It's also unclear to me if the choices of $T$ and $\\epsilon$ affect each other. I would suggest a more comprehensive ablation study on the choice of the hyperparameters $T$, $P$, and $\\epsilon$.\\n1. This algorithm uses ID iteratively. I wonder if the computational cost will increase exponentially when it comes to the high dimensional dataset or large scale dataset? I suggest the authors give a time complexity analysis with respect to the dataset scale and the dimensionality. The authors could also provide the runtime results on larger datasets (e.g. [LUNA16](https://luna16.grand-challenge.org/Data/)) if possible.\\n\\nBesides, there are some minor issues:\\n\\n5. There is a misspelling in the last sentence of **INPUT** in algorithm1, it should be \\\"to\\\" instead of \\\"ot\\\"\\n6. In algorithm1, the condition of the second for loop says M.E.done_refining == False, but I can't find anywhere that sets it false in the algorithm.\\n7. There are too many long sentences that take up to 3 lines. I would suggest breaking them down for reading.\", \"questions\": \"In this paper, the authors propose a hypothesis that *the less detailed a function is on a region of the domain, the smaller an INR needs to be to accurately describe the function in that region*. Is there any verification of this hypothesis? For example, the relationship between function details and the INR size across different levels of detailed regions. 
The authors could provide some empirical results on the appendix. It's important to give a comprehensive verification since it is the foundation of the whole paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you to all the reviewers for their detailed comments. We will take your suggestions into consideration as we prepare to revise this paper for submission to a different forum.\"}", "{\"summary\": \"This paper addresses the challenge of efficiently visualizing implicit neural representations (INRs), which are well-suited for storing high-resolution data like time-varying volumetric data. Traditional approaches typically discretize INRs to a uniform grid, leading to a loss of the inherent advantages of INRs. To tackle this, the paper introduces an algorithm that generates an adaptive mesh by pruning the weight matrices of the INR. The key insight is that areas with low variation in the INR can tolerate more aggressive pruning than highly variable regions, enabling the mesh to be refined and adapted. This approach aims to maintain the INR's resource efficiency, even in visualization contexts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper presents a compelling approach by incorporating pruning techniques into the generation of adaptive meshes based on implicit neural representations (INRs). This is an innovative idea that effectively leverages the strengths of INRs, making the visualization process more efficient and resource-conscious. The introduction of a method to visualize INRs adaptively addresses an important gap, and it highlights the potential of INRs to be used more broadly and effectively in high-resolution data applications. 
This direction holds promise and warrants further exploration to fully harness the benefits of INRs in visualization and beyond.\", \"weaknesses\": \"The paper introduces an innovative idea with significant potential, and the research direction it proposes opens exciting new avenues for leveraging INRs directly during visualization. However, despite these strengths, the paper does not feel fully 'finalized' for publication. There are several areas that would benefit from further development to strengthen its contribution. For details see below.\\n\\nFirst, although the adaptive mesh generation from INRs is well-motivated, alternative data structures commonly used to store high-resolution data are not evaluated, and comparisons with these could provide additional insights. Additionally, the term 'visualization' may be somewhat misleading, as the method centers on adaptive mesh generation rather than actual rendering of INRs, and lacks a concrete approach for efficient visualization. Please consider defining the term \\\"visualization\\\" in your application more concretely.\\n\\nThe choice of 'Basic' as a baseline is also not well-justified, and the high-level presentation of the methodology makes it challenging to fully understand the workings of the approach. Within the paper I only found a short paragraph describing the \\\"Basic\\\" algorithm (l. 230-235). A more detailed description, also describing the motivation of why the authors chose this baseline would help the paper.\\n\\nWhile the use of pruning in adaptive mesh generation is interesting, the paper could benefit from a stronger motivation for choosing pruning specifically as an optimization technique. Could the authors provide more explanation or justification for their choice of pruning as an optimization technique?\\n\\nFurthermore, an analysis of storage requirements for INRs versus the adaptive mesh is missing; comparing these could provide an insightful 'upper baseline' for memory efficiency. 
Since INRs can be directly rendered by multiple function evaluations (albeit slowly), it would be valuable to include a performance evaluation of this approach in comparison to the proposed method, especially in the context of interactive visualization. Finally, a time-based evaluation (e.g., comparing pruning-based adaptive mesh refinement versus the Basic approach in mesh construction) would give a more comprehensive view of the method\\u2019s efficiency.\", \"questions\": [\"How does the proposed adaptive mesh generation from INRs compare with other data structures traditionally used for storing high-resolution data?\", \"Why was the \\\"Basic\\\" approach chosen as the baseline?\", \"What is the motivation for selecting pruning as the primary optimization technique, specifically for adaptive meshing of INRs?\", \"How do the storage requirements of INRs compare to those of the generated adaptive mesh, and could a comparative analysis be provided? And generally, how does directly visualizing the INR compare to the adaptive mesh?\", \"Would the authors provide a time-based evaluation comparing the efficiency of pruning-based adaptive mesh refinement against the Basic approach for mesh construction?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Though the authors did not withdraw the paper, it seems from their comment\\n\\\"Thank you to all the reviewers for their detailed comments. We will take your suggestions into consideration as we prepare to revise this paper for submission to a different forum.\\\"\\nthat they decided to withdraw the paper. In addition, the reviewers universally agree to reject the paper because of the lack of contribution and comprehensive analysis and experiments.\", \"additional_comments_on_reviewer_discussion\": \"There is no discussion in the rebuttal phase, so no comments on this part.\"}" ] }
0OTVNEm9N4
Rethinking Artistic Copyright Infringements In the Era Of Text-to-Image Generative Models
[ "Mazda Moayeri", "Sriram Balasubramanian", "Samyadeep Basu", "Priyatham Kattakinda", "Atoosa Chegini", "Robert Brauneis", "Soheil Feizi" ]
The advent of text-to-image generative models has led artists to worry that their individual styles may be copied, creating a pressing need to reconsider the lack of protection for artistic styles under copyright law. This requires answering challenging questions, like what defines style and what constitutes style infringement. In this work, we build on prior legal scholarship to develop an automatic and interpretable framework to \emph{quantitatively} assess style infringement. Our methods hinge on a simple logical argument: if an artist's works can consistently be recognized as their own, then they have a unique style. Based on this argument, we introduce ArtSavant, a practical (i.e., efficient and easy to understand) tool to (i) determine the unique style of an artist by comparing it to a reference corpus of works from hundreds of artists, and (ii) recognize if the identified style reappears in generated images. We then apply ArtSavant in an empirical study to quantify the prevalence of artistic style copying across 3 popular text-to-image generative models, finding that under simple prompting, $20\\%$ of $372$ prolific artists studied appear to have their styles be at risk of copying by today's generative models. Our findings show that prior legal arguments can be operationalized in quantitative ways, towards more nuanced examination of the issue of artistic style infringements.
[ "evaluating copying", "copyright", "generative ai", "text-to-image", "ai art", "law", "interpretability", "social impact" ]
Accept (Poster)
https://openreview.net/pdf?id=0OTVNEm9N4
https://openreview.net/forum?id=0OTVNEm9N4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wqQlPgUd6c", "vwIy7IJL7X", "oIGIowQc1J", "l7vy8GCSrU", "i81cmegAwS", "fq8rhKzhUO", "ZYChGzYUnN", "WFdPIb61G6", "VlfF7dYRMV", "VBi2Ntinej", "Nm5F1ioVu8", "HwnXXKuCR1", "CnHCVLiDw3", "CFy5oBFtpA", "BPIYZXRkxA", "ASsR82AtbN", "7H7LfLRx7i", "5n5WOL1NNh", "4ujkl1dQV3" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732661658247, 1730004889055, 1732323640180, 1730834856820, 1732547971045, 1734833457326, 1729989881960, 1732546399086, 1732661525076, 1732550615896, 1732664681211, 1732672969289, 1732501255900, 1737523953362, 1732666555276, 1732641805054, 1731142931888, 1732320213680, 1732500875395 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_ZD9A" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_HLbq" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_ZD9A" ], [ "ICLR.cc/2025/Conference/Submission8993/Area_Chair_N4CW" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_yjz4" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_yjz4" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_yjz4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" ], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_gWF2" ], [ "ICLR.cc/2025/Conference/Submission8993/Authors" 
], [ "ICLR.cc/2025/Conference/Submission8993/Reviewer_yjz4" ] ], "structured_content_str": [ "{\"comment\": \"Certainly :) appendix H starts on page 30. We'd be happy to answer any follow up questions on the user study here as well.\"}", "{\"summary\": \"This paper explores a significant question of how GenAI might infringe upon the styles of individual artists and if legal frameworks could protect these styles. In particular, the authors developed a tool, ArtSavant, to measure and quantify artistic style infringement. ArtSavant mainly utilizes two methods:\\n* DeepMatch: a neural network classifier to establish a recognizable \\\"signature\\\" for an artist's style based on images. \\n* TagMatch: An interpretable, tag-based method, which decomposes artworks into stylistic elements or \\\"tags\\\".\\n\\nTheir empirical results show that GenAI models have the potential to reproduce unique artistic styles, raising copyright concerns.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and addresses a timely, important problem relevant to today\\u2019s AI and creative industries. The authors provide a solid combination of qualitative and quantitative results that contribute valuable insights into the field.\\n2. Considering \\u201cstyle\\u201d as a central focus is an innovative approach. By shifting from image-wise similarity detection to a style-based classification specific to individual artists, the paper redefines the task in a way that offers a deeper understanding of style infringement.\\n3. The paper also emphasizes interpretability through the TagMatch method, which is especially useful in legal contexts, where clarity on how stylistic similarities are identified can support arguments around style infringement.\", \"weaknesses\": \"1. 
Although I enjoyed reading this paper and find \\u201cstyle\\u201d to be an intriguing approach to this problem, I am concerned about the inherent ambiguity surrounding this concept. The paper assumes that \\u201cstyle\\u201d can be quantitatively defined and detected, yet style is fundamentally a qualitative and fluid concept, often shaped by subjective interpretation. Additionally, even in the real world, many artists have very similar \\u201cstyles,\\u201d which complicates the notion of unique stylistic signatures.\\n\\n2. I wonder how a similarity-based method would perform on this dataset (please correct me if I missed this comparison in the paper). Are there cases where the style-based method detects something that a similarity-based method does not, or vice versa? A direct comparison could provide clearer insights into the advantages and limitations of each approach.\\n\\n3. Regarding TagMatch, I understand its goal of enhancing interpretability; however, I find it somewhat limited in scope. First, it\\u2019s a relatively naive approach in some respects, relying solely on zero-shot CLIP with predefined tags. Second, \\u201cstyle\\u201d implies something more subtle and nuanced than broad artistic categories. Even within the same category, there can be vast differences between artworks, so I\\u2019m unsure of TagMatch\\u2019s practical utility in capturing the deeper, unique aspects of an artist\\u2019s style.\", \"questions\": \"1. Could you provide more quantitative and qualitative discussions of similarity-based vs. style-based methods?\\n2. I would appreciate any further clarifications regarding my concerns about Weaknesses. 
And I am willing to raise my score if I find them convincing\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comparisons to prior work -- novelty is in the framework and application of the components -- and answers to questions\", \"comment\": \"**On comparison to prior work.** We emphasize that our central contribution is our framework \\u2013 leveraging classification over image sets to quantify the uniqueness of an artistic style and the degree to which it is copied \\u2013 more so than the implementation of it. This is particularly true for DeepMatch. As noted by both the reviewer and ourselves in the paper, others have trained art classifiers before, but these classifiers have not been used to study whether or not unique artistic styles exist and if they are copied, with an explicit goal of aiding legal decisions. This goal manifests in our prioritization of ease-of-use (our classifiers train in minutes) and interpretability (via TagMatch), which previous art classifiers lack. Nonetheless, we compare to multiple alternate implementations in Appendix C. Importantly, while individual accuracy numbers vary slightly, the relative \\u2018recognizability\\u2019 (accuracy per artist) is highly correlated across different classifiers. This leads us to believe that \\u2013 to your question \\u2013 \\u2018models fail in similar ways\\u2019, though we\\u2019d interpret low classification accuracy for an artist not as a \\u2018failure\\u2019, but instead as a signal suggesting that the artist\\u2019s style is not particularly unique.\\n\\nAs for comparing the interpretability of TagMatch with CBMs, a human study would be required, which is out of scope, as this paper\\u2019s focus is its framework and findings, instead of any individual component. 
However, our claim on the potentially enhanced interpretability of TagMatch is based on prior human studies that show the importance of concise explanations in order for users to find them helpful: [1] argue a strict upper bound of 32 concepts before humans can no longer make use of them. The CBM baseline\\u2019s interpretability consists of similarity coefficients for 260 concepts. Making this 20% sparse with an l1 penalty drops accuracy, and still requires inspection of over 50 concept similarities, which may be cumbersome for a jury. In contrast, our method\\u2019s tag signatures are no longer than ten tags long, and also come with visual evidence due to TagMatch\\u2019s inherent attributions.\\n\\n[1] Ramaswamy et al, Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability, CVPR \\u201823\\n\\n**Answers to questions**:\\n- **The goal is not to create an automated system for copyright judgments so that humans do not need to look. We specifically design our system so that humans have more to look at, so that they can make more informed judgments**. We believe our goal \\u2013 aiding humans in what is a very nuanced question \\u2013 is directly in line with and inspired by Sobel\\u2019s scholarship on what factors (extrinsic/analytic and intrinsic/holistic) are to be considered in determining copyright infringement. \\n- We cannot answer the question of how aligned our tool\\u2019s outputs would be with judgments from actual juries because of the novelty of the issue at hand. **There is very limited, if any, precedent for decisions on artistic style, so we do not have sufficient ground truth to compare against**. Thus, there are no clear criteria that would define a \\u2018failure\\u2019 of our tool.\\n- One next step is doing human studies to gain insight as to how real-life jurors feel about our tool and how the tool can be improved to better help jurors in making their decisions. 
TagMatch also opens up a line of research in interpretable and attributable by design classifiers, which can be studied in greater depth. \\n\\nLastly, we note that **we are in the midst of conducting a human study to further validate our assessments of style copying**. Thank you for this suggestion! We will report our results as soon as we obtain them. \\n\\nLooking forward to your feedback on our initial rebuttals. And apologies for the late Friday night posting -- drafting our legal history took a bit of time.\"}", "{\"summary\": \"This paper introduces ArtSavant, an explainable classifier for identifying artistic style infringements in generated art. The proposed framework consists of DeepMatch, a black-box neural classifier, and TagMatch, an interpretable tag-based method, to quantify the uniqueness of an artist\\u2019s style and recognize if it appears in generated images. The central idea is that if an artist\\u2019s works are consistently recognizable, they contain a unique style that can be classified. The approach uses both holistic and analytic style comparisons. It combines CLIP embeddings and tagged stylistic elements to support style infringement claims in a legally relevant, interpretable manner.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. TagMatch offers an interpretable method for identifying stylistic elements, making it particularly valuable in legal contexts where explainability is essential.\\n\\n2. The paper includes a comprehensive evaluation of the proposed methods, including both quantitative and human evaluation.\", \"weaknesses\": \"1. TagMatch relies on LLMs to generate concept vocabularies, which may limit its effectiveness for less-known artists whose stylistic elements may not be well-covered in pretraining data. Could you show how TagMatch performs on less-known artists? 
If there are some gaps between less-known and well-known artists, I am curious if there is a way to enhance the vocabulary to better capture these unique styles?\\n\\n2. DeepMatch uses a black-box for detection. However, such black-box classifiers may pick up on spurious details rather than genuine stylistic features. For example, if an artist always includes a certain animal in his art works, DeepMatch might use this feature to classify the style. Could you provide some evidence that DeepMatch\\u2019s classification is based on broader stylistic elements instead of just this minor feature?\\n\\n3. The preliminary study uses DINO features, which might be limited in representing stylistic nuances. Could you explore using features that are specifically trained for style similarity [1] to compare with your method as a baseline? What are the pros and cons of the classifier-based approach proposed in this paper and the embedding-based approach? \\n\\n\\n4. The authors noted that a new artist could easily retrain the detector to include their works for the DeepMatch approach, as it\\u2019s quite efficient. However, I\\u2019m curious about the potential impact on performance. Does retraining lead to issues like catastrophic forgetting of previously learned styles? It would be interesting to see a case study where the existing classifier is expanded to include new artists, observing how this affects both new and original classifications.\\n\\n[1] Unsupervised Image Style Embeddings for Retrieval and Recognition Tasks\", \"questions\": \"1. In line 21, there should be a space between the method name and the next word.\\n\\n2. How many training examples from one artist are required to reliably detect the style of that single artist in DeepMatch? \\n\\n3. Do DeepMatch and TagMatch provide different predictions for certain examples? 
If so, in what situations does this occur, and what are the characteristics of these artworks that lead to differing predictions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response and for incorporating the experiment. The inherent ambiguity of the question presents challenges and limitations. **I still have some concerns about whether style or art can truly be quantified and identified in this way, as it may be overly naive, especially in a legal setting.** However, I do appreciate that the authors have made a solid attempt. While the method may not fully resolve the question, it represents meaningful progress and provides interesting insights. As a result, I have updated my score accordingly.\"}", "{\"metareview\": \"The authors propose to quantify the degree of artistic style infringement by measuring the extent to which an automated classifier can recognize the style of that artist. This is a nice, simple idea that attempts to tackle a highly topical and otherwise thorny problem, and I appreciate the authors' attempts to bring clarity and attention to this issue. The initial version of the paper was unclear about the legal positioning it was adopting as well as some baselines, but these issues were addressed in the rebuttal. After discussion, reviewers were unanimous in recommending acceptance and I am happy to recommend acceptance as well.\\n\\nI encourage the authors to incorporate the reviewers' feedback into their camera-ready. In particular, please be careful about the legal claims made in the paper. 
They are significantly less controversial after revision, but I note that the revised introduction still suggests that, e.g., automatic quantitative approaches (and specifically, your particular automatic quantitative approach) could be used entirely to make a legal judgement.\", \"additional_comments_on_reviewer_discussion\": \"The most substantive changes were around what Reviewer yjz4 suggested wrt legal framing and baselines. These were satisfactorily addressed in the rebuttal.\"}", "{\"summary\": \"The paper aims to develop an \\u201cintuitive, automatic, legally-grounded\\u201d approach to determine style infringement. To do this, it trains two classifiers: one \\u201cDeepMatch,\\u201d a traditional image classifier trained to classify 372 artists based on a WikiArt training set, and a second, \\u201cTagMatch,\\u201d which classifies artist styles using a human-interpretable tag-matching heuristic on top of more than 100 CLIP-derived tags across 10 aspects of artistic styles. Finally it conducts a measurement of images generated by popular diffusion models to quantify the number that resemble an artist\\u2019s style according to DeepMatch, and generates some explanations using TagMatch.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The problem area is important, with image synthesis models creating potentially huge economic impacts for the artistic professions. There is a need for scientific analysis to help guide discussions about implications for copyright and copyright law. Quantifying the amount of style imitation in the large models is a worthy goal. And in developing its methods, the paper recognizes the importance of interpretable explanations when it comes to human arguments.\", \"weaknesses\": \"[I have revised this score upwards based on revisions and discussions during rebuttal period; these are concerns having to do with the original submission]\\n\\nThe paper is not ready for publication in ICLR. 
There are several specific problems\\n\\n1. The legal discussion is not well-supported, and it is not sufficiently pedagogical for this audience.\\n2. The suitability of the classifiers for the task is not sufficiently measured or justified or compared to previous technical work.\\n3. The evaluation of style imitation in diffusion models does not support its conclusions.\\n\\nThe legal discussion is the most serious problem. For the technical audience at ICLR, a paper discussing legal issues must play a tutorial role. The audience at the conference is technical and not made of legal experts, so when making claims about legal issues, it is especially important for information about legal reasoning to be representative, explanatory, and correct. In contrast, this paper seems to be advancing an adventurous legal viewpoint in a venue with a legally unsophisticated audience.\\n\\nSpecifically, in the section on the legal framework, the paper puts in boldface: \\u201c**the task of showing the existence and uniqueness of artistic styles can be reduced to classification** \\u2013 something deep networks are particularly adept at doing.\\u201d That claim appears to contradict the paper\\u2019s own legal citations. For example, when contemplating legal tests for style infringement, the cited work by Sobel makes a distinction between \\u201cextrinsic\\u201d similarity that can be described in words as opposed to an \\u201cintrinsic similarity\\u201d which is perceived by humans but not easily described in a mechanical way. Sobel illustrates the distinction by surveying many subtle situations that have appeared in case law. In the perspective provided by Adobe\\u2019s proposed style infringement legislation, the test is not pinned on the output, but rather, the intent of the AI user is placed at the center of style infringement. 
Both of these legal perspectives seem to be at odds with the paper\\u2019s proposed reduction of the style infringement test to an automated and mechanical artist-identification image classification problem. Neither of these central legal issues is surfaced to the ICLR reader: the paper omits any contemplation, measurement, or comparison to the intrinsic judgements that would need to be made by a jury, nor does it make any measurement, prediction, or discussion of intent by the user of the AI.\\n\\nThis reviewer strongly feels that ICLR should not be the place to advance a new legal theory. Plenty of scientific questions arise in the legal discussions, such as whether automated methods might be able to anticipate the judgement of a jury (and if not, why not), or whether the intent of the user can be correctly guessed by an automated method. At ICLR it would be more appropriate for the paper to pose and investigate a scientific question, and it should not lead with a novel legal theory in boldface.\\n\\nOn the suitability of the classifier. More careful comparisons to previous work are needed. Several previous works have focused on style classification, such as Karayev and van Noord cited in footnote 2. However, the current work does not attempt to compare its approaches to any previous approaches, and it does not build on top of any of the evaluation approaches. For example, van Noord takes on artist identification using the \\u201cRijksmuseum Challenge\\u201d and analyzes and breaks down failed predictions. Do the proposed classifiers work better or worse than van Noord? Do they fail in similar ways? What is different in the technical approach that might lead us to expect the classifiers are more suitable? Another insufficient comparison is between TagMatch and Concept Bottleneck Models.
Table 1 in the appendix does a single pair of comparisons but does not quantify the sparsity advantage of TagMatch, or give any systematic comparison of meaningfulness to humans. The heuristic in TagMatch seems ad-hoc: why would we expect its sparse set of labels to be more meaningful to a jury than the ones provided by CBM? No evaluation on that is done.\\n\\n\\n\\nOn the evaluation of existing style copying. The paper\\u2019s conclusions are not sufficiently supported. The paper\\u2019s analysis of output from Stable Diffusion and OpenJourney concludes that most of the artist styles are not copied accurately, identifying just 16 of 372 artists whose styles are copied strongly. However, no triangulation is done on this measurement, so it is unclear whether the low rate of identification is due to a weakness in the classifier, or whether it is due to a lack of style-imitation in the diffusion models. A human evaluation on generated art style imitation could be done to check the estimate. Or larger-scale data resources could be used, for example, the \\u201cparrotzone.art\\u201d project has identified several thousand styles that SD copies well, and these could potentially be used as an independent source of human assessment of style similarity.\", \"questions\": \"Is the goal of the paper to create an automated system to make quick judgements about style infringement without requiring humans to look?\\n\\nDoes your goal contradict Sobel's view that these judgments are not possible to articulate in clear categories, and that they are inherently the province of a human jury to make?\\n\\nTo what extent do you believe that your system matches the style-infringement judgements that a jury would make \\\"by hand\\\"?\\n\\nWhat kinds of failure cases does the system have? 
What patterns characterize these failures?\\n\\nIf another scientist wishes to improve upon your system, what measurement could they perform that would indicate that they have made an improved system?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"highlight: new quantitative comparison of our method to image-sim methods, w intuitive explanation\", \"comment\": \"Thank you for your insightful comments! We apologize for extenuating circumstances leading to the delay in our response. We address comments and discuss a new experiment below.\\n\\n**Ambiguity of style**. You're right! Style is qualitative, and this is precisely what makes questions about copyright protection for styles very challenging. Our work helps clear up some ambiguity around style by taking a quantitative approach based on legal scholarship and an intuitive logical argument (i.e. an artist has a unique style if their work can be consistently recognized). With this framework and central idea, we study key relevant questions, leading to new evidence suggesting that many artists have unique (recognizable) styles, and that generative AI may copy some of these styles. Thus, despite the ambiguity of style infringement, it requires study, as some artists may already be at risk.\\n\\nWe also find evidence of some artists with very similar styles \\u2013 exactly as you mention! \\u2013 using our method (Appendix B.1), demonstrating the utility of our framework.
Other specific ways in which our tool can help human decision makers include (a) surfacing the most similar artists to a plaintiff artist (based on whose work is most frequently misclassified to the plaintiff or vice versa), along with generated art, so that a jury can have more evidence to decide if the AI generations pose an unprecedented level of style similarity; (b) presenting side-by-side sets of art that match a tag signature, with elements comprising that style articulated via TagMatch. In summary, our tool provides quantitative insight into key questions around style and style infringement, toward helping the human decision makers (judges, juries, lawyers) navigate the nuance/ambiguity you mention.\\n\\n**Deeper comparison to image-similarity methods**\\n\\nWe now present a new experiment comparing our method to baselines like CLIP and CSD (a version of CLIP fine-tuned to better measure style similarity; Somepalli et al ECCV '24). We test how well each method can retrieve images from a fixed reference set of artworks based on similarity to a query artwork. We take the train/test split of our dataset to serve as the reference/query sets for retrieval. We count each retrieved artwork created by the same artist as the query as a true positive, and each one created by a different artist as a false positive. As DeepMatch is not intended for retrieval (unlike the baselines), we repurpose its classifier for retrieval by using the output softmax probabilities for an image as an encoding of that image (i.e. each image is represented by a 372-dimensional vector, where the $i^{th}$ element corresponds to the likelihood that the image is authored by artist $i$). \\n\\nIn Fig. 11 of App. D.3 (new), we plot the distribution of AUROCs over the artists in our dataset for the baseline methods and DeepMatch. We find that CSD outperforms CLIP in this task (mean AUROC of 0.89 vs.
0.86), and DeepMatch performs best of all (mean AUROC 0.98).\\n\\nIntuitively, our improved performance can be linked to the original objective of the three methods considered. Image similarity methods are not equipped or intended to measure stylistic similarity, but rather only a general sense of similarity between two images. Even methods such as CSD, which are specifically trained using a contrastive loss to be invariant to style-preserving transformations, are not aware of the components that constitute a *unique* artistic style. That is, while CSD is trained to contrast general art styles (e.g. *impressionism* from *cubism*), it is not trained to contrast between two (potentially very similar) artists (e.g. ***Manet's** impressionism* vs. ***Monet's** impressionism*). From a copyright perspective, we are precisely interested in only those stylistic components that set an artist\\u2019s work apart from their peers - that is, a unique artistic signature. This naturally suggests a framework which analyzes style from the lens of image classification. Our method, DeepMatch, is trained to classify artworks and thus upweights the stylistic features which are most unique and therefore most useful for classification. \\n\\n**Composing tags makes them more specific, better capturing the nuance required to distinguish artists**. Precisely as you note, each individual tag is insufficient to define a unique style, as artworks sharing one tag can be vastly different from one another. This is why tag composition is necessary: we observe that while each individual tag does not define any artist\\u2019s unique style, combinations of tags can form signatures, where only one artist frequently uses all the tags in the combination together (see Fig. 6).
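This tag-composition idea can be sketched in a few lines. The following is an illustrative toy example only (the tag names, the 50% usage threshold, and the function names are invented for this sketch, not the actual TagMatch implementation): a pair of tags counts as a signature for an artist if that artist uses both tags together in at least half of their works and no other artist does, and classification is then a look-up of whose signature appears in a new artwork's tag set.

```python
# Toy sketch of tag composition: individual tags are shared across artists,
# but a combination of tags used frequently by exactly one artist forms a
# "signature" that makes that artist's work recognizable.
from itertools import combinations

def find_signatures(artist_tagsets, min_freq=0.5):
    """artist_tagsets maps artist -> list of per-artwork tag sets.
    Returns artist -> list of tag pairs unique to that artist."""
    def pair_freq(works, pair):
        # Fraction of the artist's works that contain both tags in the pair.
        return sum(pair <= tags for tags in works) / len(works)

    all_tags = set().union(*(tags for works in artist_tagsets.values()
                             for tags in works))
    signatures = {}
    for pair in map(frozenset, combinations(sorted(all_tags), 2)):
        users = [artist for artist, works in artist_tagsets.items()
                 if pair_freq(works, pair) >= min_freq]
        if len(users) == 1:  # the pair is frequent for exactly one artist
            signatures.setdefault(users[0], []).append(pair)
    return signatures

def classify(tags, signatures):
    """Return artists whose signature is contained in a new artwork's tags."""
    return [artist for artist, sigs in signatures.items()
            if any(sig <= tags for sig in sigs)]
```

In this sketch, "bold outlines" alone would match several artists, but the pair ("bold outlines", "pastel palette") might be frequent for only one of them, so only that artist is returned by the look-up. Longer tag combinations follow the same subset-containment logic.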
While TagMatch employs a zero-shot multi-label tagging scheme with CLIP, its true novelty comes in this tag composition step and in how classification is ultimately performed (via look-up of tag signatures), which makes it interpretable and attributable by design. \\n\\nWe'd love to answer any follow up Qs! Thank you.\"}", "{\"title\": \"Help navigating changes\", \"comment\": \"Thanks - a user study validating that DeepMatch correlates with human judgment would address a couple of my key concerns.\", \"minor_request\": \"can you help me find the results in the paper - I'm having trouble navigating the changes without redlines - at which lines or sections should I look for the writeup of the user study?\\n\\nI notice the new footnote in the introduction which is helpful.\"}", "{\"title\": \"brief update: new comparison w CSD; human validation in progress\", \"comment\": \"Thank you very much for your continued engagement in discussing our work! We appreciate the time you're taking and the specificity and constructiveness of your suggestions. We wanted to provide a brief update before adding another comment + incorporating more suggestions later.\\n\\nFirst, **our human validation is ongoing**, and we'll share the results as soon as we have them. We hope to show more evidence that humans agree with instances where the style of generated art for an artist is deemed not sufficiently similar to real art by that artist, according to our method. We note this task is not very easy for humans, as they need to have a vast knowledge of existing art to contextualize and better assess if a matched style in generated art is *unique* to the potentially copied artist. Fortunately, our method can help surface relevant art to aid the human in making this decision, which we will use in the setup of our human eval.\\n\\nWe have also added a **new experimental comparison** to image similarity methods, including CSD.
Namely, in Appendix D.3, we compare the retrieval abilities of CLIP, CSD, and our method [by repurposing softmax probabilities from DeepMatch's classifier given an image as that image's representation] in surfacing art by the same artist given a query artwork -- see our rebuttal to reviewer ZD9A and App. D.3 for details. Our method outperforms the two baselines, even though DeepMatch is not originally intended for retrieval. The key distinction is that our approach directly seeks to find features that distinguish styles between *individual* artists, whereas CSD contrasts broader artistic styles (not specific to each artist; e.g. CSD is optimized to distinguish impressionism from cubism, but not necessarily Monet's impressionism from Manet's impressionism). For questions around copyright of unique artistic styles, we believe it is necessary for us to (a) take on this greater level of distinction/specificity in characterizing individual artistic styles, and (b) prioritize making the outputs of our work readily accessible to a non-technical audience that may comprise the human decision makers (judges, juries, etc.) we aim to assist.\", \"other_key_notes_about_csd\": [\"It **can easily be incorporated into our framework** as the backbone instead of CLIP. This is because the focus of our work is in a sense broader than that of CSD. We center the legal questions around copyright of individual styles, and design our framework around principles of ease-of-understanding/use and accessibility to a non-technical audience (via our 'recognizability' logical argument and interpretable components like TagMatch). CSD is an excellent contribution, and the resultant style fine-tuned backbone can be used in place of CLIP in our framework, so as to leverage their improved embeddings while still delivering assessments of style copying in an easy-to-digest manner.
We'll add evidence of how CSD and our method can be integrated in a later comment.\", \"**Their training (including contrastive and self-supervised losses to fine-tune a large model) is far more intensive than ours**, which requires just a simple classification loss on a small number of trainable parameters. Thus, **while WikiArt was small for their purposes, it is suitably sized for ours**.\", \"Lastly, we **are in the process of modifying the main text per your suggestions**, including changing wordings to specify our goal, underscore the nuance of this problem, highlight the legal context (with explicit references to our new appendix section).\"]}", "{\"title\": \"Confirmed that biases are not present\", \"comment\": \"Thank you for your kind words (especially on the topic of our legal discussions and the accessibility of our approach to broader audiences) and valuable feedback! We answer questions below. Sorry for the delay.\\n\\n**Checking for potential biases**. We appreciate the reviewer\\u2019s attention to this important issue. To check for biases more closely, we collect two new forms of metadata: we obtain artist **nationality** from WikiArt when available, and we proxy \\u2018**popularity**\\u2019 by counting the number of visits to each artist\\u2019s wikipedia page using PageMetrics (inspired by [1] who also used web traffic to proxy popularity). Then, we inspect performance of DeepMatch and TagMatch on real held-out art over different continents and popularity levels. Both are stable: for regions, DeepMatch average accuracies and TagMatch accuracies fall within a 6% and 3% range respectively. For popularity, over four equally sized quartiles, DeepMatch average accuracies and TagMatch accuracies fall within a 3% and 7% range. 
Since we have an accuracy per artist for DeepMatch (whereas TagMatch either matches or does not match the artist), we can also inspect the correlation between our popularity measure and the DeepMatch confidence per artist: we find there to be virtually none (Pearson r of 0.04). \\n\\n**Diversity of dataset**. With our new nationality data, along with previously collected metadata, we were able to see that our dataset has artists from all 6 continents (aside from Antarctica) and at least 40 nations, while also spanning 81 distinct styles (according to WikiArt\\u2019s categorizations). Importantly, our code (to be released) allows users to easily expand our dataset to artists with less than 100 artworks on wikiart \\u2013 this threshold is somewhat arbitrary, and related more to our goal of measuring copying by generative models of \\u2018prolific\\u2019 artists. \\n\\nAs for artists that fall under styles (based on WikiArt) that are more niche, we observe them to actually be recognized at higher rates, with a small to moderate positive correlation of r=0.4 between DeepMatch accuracy on held out art for an artist and the number of other artists who fall under the same style category as that artist. The intuition here is that if only one artist practices some style, their work will be more recognizable than an artist that practices a style that is very common, like Italian renaissance art (see Figs. 4 and 10). \\n\\n**TagMatch\\u2019s components (vocabulary, tagger) are flexible**. We note that TagMatch is sufficiently modular to where an improved tagger can easily be swapped in. More readily, one can modify the underlying vocabulary to incorporate more descriptors relevant to some broader style of interest. Thus, the reviewer is correct in that TagMatch can easily be improved as underlying technology evolves. \\n\\n[1] Sun et al, \\\"Head-to-Tail: How Knowledgeable are Large Language Models (LLMs)? A.K.A. 
Will LLMs Replace Knowledge Graphs?\", NAACL 2024\"}", "{\"title\": \"Summary of updates\", \"comment\": \"First, thank you to all reviewers for their time and engagement -- it is a privilege to get your continued feedback.\", \"we_would_like_to_briefly_summarize_the_main_updates_to_the_paper_made_during_the_discussion_period_thus_far\": [\"**New extended discussion of legal context**: We have added App. B to serve as a crash-course / starting point for readers to understand the legal context of our work, going over centuries of history and over a dozen cases.\", \"**New experiment directly comparing image-similarity methods to our approach**: We find that DeepMatch's softmax probabilities are surprisingly strong when used as image representations for the retrieval task of getting art from the same artist as a query image. We directly compare to (and beat) embeddings from CLIP and a recent style fine-tuned CLIP. We also stress that our work tackles a distinct (though related) problem to style similarity -- we are more concerned with finding and articulating *unique* styles, as we find this critical to the copyright discussion. Details in App D.3.\", \"**New human study validating our style copying judgments**: We design and carry out a user study to assess if human judgments match the outputs of our tool when determining if generated art infringes on an artist's unique style (App. H).
The human study corroborates our automatic judgments, and sheds insight on how style similarity differs from our task of (unique) artistic style infringement.\", \"Other small highlights: like **demonstrating that style fine-tuned embeddings can be swapped into our method easily and with little change to results**, and **quantitative confirmation that our tool is not biased to popularity or geography**.\", \"We look forward to the rest of the discussion period and would be more than happy to clear up any more concerns.\"]}", "{\"title\": \"On contextualizing with respect to prior work\", \"comment\": \"In view of the clarified goals of the paper, one of the most important prior work is Somepalli 2024, which is the current state-of-the-art in creating an empirically grounded artistic style metric.\", \"a_couple_points\": [\"The citation for Somepalli 2024 should be corrected. Although recent, it was presented at ECCV 2024 (not just a preprint).\", \"The contributions, approach, and relevance of that work should be expanded a bit more in the text, and compared and contrasted with the approach taken in the current paper.\", \"Ideally, the method should be compared directly with Somppalli 2024's CSD method.\", \"In particular, Somepalli 2024 asserts that datasets \\\"like WikiArt are not large enough to train a good style feature extractor\\\" - but the WikiArt approach is exactly what the current paper is attempting. The merits of this assertion should be discussed a bit.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you very much for your time and valuable feedback! We answer questions below.\\n\\n**Performance for lesser known artists**. As detailed in our response to reviewer gWF2, we find very little difference in performance for TagMatch over artists grouped by popularity (measured based on web traffic to their Wikipedia page); over four quartiles, TagMatch accuracies fall within a 7% range. 
While there is little to no gap based on popularity of the artist, we nonetheless note that TagMatch is flexible in that both its vocabulary and tagger can be swapped out with analogs that better cover more niche styles as needed, giving the user greater ability to engage with and modify TagMatch as needed. \\n\\n**Spurious correlations**. This is a great point and further emphasizes the need for an interpretable component of ArtSavant. Importantly, **it is rather non-trivial to decide what constitutes a \\u2018spurious\\u2019 feature in this case, as any sort of pattern can arguably be seen as a stylistic motif**. With the interpretability of our approach, ideally humans can decide for themselves if articulated stylistic elements are valid or spurious. Having qualitatively inspected many cases, including in a recent human study for reviewer yjz4, we can say that there have not been any obvious spurious features being relied upon. Nonetheless, one could leverage input-attribution techniques to better understand how DeepMatch makes predictions. \\n\\n**Classifier vs. embedding based approach**. We have added a number of explorations of using/comparing to style-finetuned embeddings. In short, (i) these embeddings can serve as a drop in replacement to the CLIP embeddings in our framework, without much of a change in outputs (ii) in a retrieval task (retrieving art from the same artist given a query image), softmax probabilities from our classifier turn out to be more effective than both vanilla and style-finetuned embeddings, likely because *our classifier explicitly upweights information related to distinguishing artists (relevant to our objective of understanding when and why artistic styles are \\u2018unique\\u2019), while other embeddings capture much more general information about the image* (this is the key pro / con or distinction between the two approaches, in addition to the enhanced interpretability of our approach). \\n\\n**Retraining capacity**. 
An earlier iteration of our dataset contained about 10 more artists, and we saw accuracy of our classifier then was only slightly less (~1pp less) due to the higher total count. Thus, for our proposed usage of just adding art for the one artist at a time, we believe there is minimal risk of forgetting. \\n\\n**Answers to questions**. \\n- Thanks for pointing out the typo - it is now fixed. \\n- The lowest number of works per artist in our training set is 80, and even at this count, there is an artist whose work is recognized with 82% accuracy. \\n- DeepMatch is more accurate, so TagMatch sometimes confuses stylistically similar artists when DeepMatch is correct, though generally, the predicted artist using DeepMatch is within the top 5 predicted artists using TagMatch. We can still inspect matched tags in these cases, allowing us to make use of TagMatch\\u2019s inherent interpretability and attributions.\"}", "{\"title\": \"New human evaluation and study of adapting CSD as the basis for the classifier\", \"comment\": \"We now present **two more new analyses** (in addition to the last retrieval experiment directly comparing CLIP, CSD, and DeepMatch), directly following your suggestions.\\n\\n**Human validation of style copying**. We design and conduct a human study to assess style copying, toward verifying the outputs of our system. Given a \\u2018plaintiff\\u2019 artist A, we present a set of artworks generated by Stable Diffusion v1.4 in the style of the plaintiff A, along with two sets of real artworks: one from the plaintiff A, and another from a very similar artist B (selected as the artist \\u2013 aside from A \\u2013 that DeepMatch predicts to have created the highest number of the artworks generated 'in the style of A'). A human then denotes if the set of generated art more closely resembles the style of the set from artist A or artist B, with the choice to abstain if neither set is more similar to the generated art than the other. 
Abstention may occur if the generated art is different from both sets or equally similar to both sets. Each set contains 16 works. The two sets of real art are presented in random order so that the human does not know which belongs to the plaintiff artist.\", \"we_run_this_experiment_for_40_artists\": \"20 which are flagged by our method as being at risk of style copying, and 20 which are not \\u2013 we call the first group COPIED and the second group SAFE. We find that **the percent of artists where the generated art is marked as more similar to the plaintiff is 90% for the COPIED group and just 5% for the SAFE group**. Notably, the human abstained in 19 of the 20 artists in the SAFE group: for 15 artists, the generated art was dissimilar to both sets of real art, while for 4 artists, the generated art was equally similar to both sets. The latter type of abstentions underscores an important point: **generated art can be similar to a plaintiff artist without necessarily meeting a bar for infringement, since infringement would require that a *unique* style is copied. This point relates to the discussion of CSD: while an improved style similarity metric can help surface relevant art, on its own, it does not immediately lead to an understanding of if an artist\\u2019s style is unique, and what elements comprise that style. These questions are critical when it comes to copyright protection for artistic styles, and we center answering them in our work.**\\n\\nIn summary, **our new human study corroborates the outputs of our method**. Artists that are flagged to be at risk of style copying have generated artwork that is more similar to their work than even art from the most similar other artist. However, artists who are not flagged by our method have generated artwork that is either dissimilar to their style, or no more similar to their style than to that of another artist. \\n\\n**Adopting CSD as the basis of our classifier**. 
We swap CLIP with CSD as the backbone of DeepMatch\\u2019s classifier, adding new results in App. D.1. Notably, the recognizability \\u2013 rate at which art is recognized as belonging to an artist \\u2013 over artists is highly correlated between the two choices of backbone, both for real held-out art (Pearson r=0.82) and generated art (Pearson r=0.83). This underscores that our work and CSD have different (but complementary) underlying goals, so CSD can easily be integrated to our framework. Namely, CSD aims to obtain an improved style similarity metric, while we focus on finding *unique* artistic styles. Still, in our prior new rebuttal experiment on image retrieval [more related to CSD's goal], we find that the softmax probabilities of DeepMatch serve as a strong image embedding for the task of retrieving works by the same artist of a query image, beating both CLIP and CSD for WikiArt (see new App D.3), suggesting DeepMatch may have utility in measuring style similarity on the level of image-pairs, though again, our goal is distinct from that task.\\n\\n**Updates to writing**: We have made changes to the text to highlight new studies, including the human validation, detailed legal history section, and CSD comparison you suggest, along with clarifications of the purpose of our work (including but not limited to explicitly stating that we do not wish to replace humans in the intro, sec 2, and conclusion).\\n\\n**Other minor clarifications**: \\n1. Our method flags roughly 20% of artists to be at risk of style copying (which amounts to ~75 artists out of 372, not 16 like you mention). \\n2. We study a very simple prompting setting, where each prompt consists of just a painting name with the suffix \\u201cby {artist}\\u201d. A more involved prompting strategy could lead to greater degrees of style copying.\"}", "{\"summary\": \"The paper introduces ArtSavant, a tool designed to assess artistic style copying in text-to-image generative models. 
Built on legal scholarship, ArtSavant combines two methods, DeepMatch (a neural classifier) and TagMatch (an interpretable tag-based approach), to detect unique artistic styles and assess whether they are replicated in generated images. An empirical study using ArtSavant indicates that around 20% of the artists in their dataset appear at risk of style copying by generative models, raising concerns for the need to protect artistic styles under copyright.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses a timely issue -- potential copyright infringements in text-to-image generation -- that bridges technical, legal, and ethical domains.\\n2. ArtSavant\\u2019s combination of DeepMatch and TagMatch represents a thoughtful approach, with one method offering high accuracy and the other interpretability. This approach is likely beneficial for non-technical audiences, such as legal professionals and artists.\\n3. The paper is well-grounded in legal discussions, positioning ArtSavant as a tool that can potentially support legal decision-making regarding style infringement.\", \"weaknesses\": \"1. The use of a limited reference dataset (372 artists) could affect the generalizability of ArtSavant\\u2019s findings, especially for artists with less established styles. Expanding the dataset to include more diverse artistic styles could strengthen the conclusions.\\n2. ArtSavant may struggle with assessing artists whose work doesn\\u2019t conform to traditional or well-known styles, limiting its broader applicability. It may inadvertently favor more mainstream artistic elements, possibly overlooking style copying for non-Western, niche, or experimental art styles.\\n3. Although TagMatch aims to make the tool interpretable, the subjectivity inherent in artistic tagging could affect its reliability, especially in legal contexts. 
This may be partially addressed by improving tagging accuracy, as noted by the authors.\", \"questions\": \"1. How does ArtSavant perform when applied to more obscure or emerging artists whose styles may be less distinctive or well-known?\\n2. The TagMatch method relies on zero-shot tagging with CLIP, which may not capture subtleties in artistic style. Have the authors considered evaluating the reliability of TagMatch across different art genres or complex styles, and could a more refined tagging approach improve interpretability and consistency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"New appendix section to provide greater legal context\", \"comment\": \"Thank you for your valuable feedback. We first address your concern the level of *legal discussion*.\\n\\nWe completely agree that legal context is very important, as ICLR\\u2019s audience may have less exposure to relevant legal discourse. **To provide greater legal context in a more pedagogical manner, we add a new appendix section (B; after limitations)** that reviews the history of copyright (from concerns with printing presses in 1710 to chromolithography in 1870 to motion pictures in 1912 and sound recordings in 1971), the current legal landscape, and how our work may fit into it. Further, we clarify that we do not seek to \\u2018advance a new legal theory\\u2019 nor to simplify (via automation) the nuanced issue of copyright. We highlight the following paragraph from our new appendix section:\\n> Our purpose in this paper is not to take a position on the issue of whether copyright law should be extended to protect individual artistic styles... *Neither is our purpose to automate the analysis of copyright infringement*. 
Rather, we are interested in investigating whether there is any scientific support for the idea that there are identifiable individual artistic styles, and that those styles could be correlated with a group of human-understandable stylistic terms. \\u2026when we try to train a model that can correctly identify authors of previously unseen works, we may get closer to understanding whether and when individual artistic styles exist. If we can link that classification to a selection of terms describing characteristics of those works, then we can explain to human beings what the components of such individual artistic styles might be. *If some kind of protection of individual style ever became part of copyright law, judges or juries would still have to decide whether a particular output of a generative AI tool too closely mimicked the individual style of an artist. The degree to which an AI model could or could not correctly identify the author of a work, and the stylistic terms that that model could or could not correlate with that classification, would simply provide additional information to those human decision makers.*\\n\\nIn other words, **our tool aims to assist human decision making, not replace it**. We identify how deep learning techniques can be used to quantitatively study relevant questions to the copyright debate, like if unique styles exist, how they can be articulated, and how much they reappear in AI-generated work. \\n\\nWe specifically designed our methods so that jurors can engage with their outputs, even as our tool processes tens of thousands of artworks. Namely, **our tool can aid the \\u2018intrinsic\\u2019 decisions by**:\\n- *Surfacing the most similar artists (and their works) to a plaintiff artist*, by seeing whose work is most frequently misclassified to the plaintiff or vice versa. The humans can then decide if the plaintiff\\u2019s style differs from the most similar artists sufficiently to have a 'unique' style. 
We explore this in Appendix C.1 for instances of artists whose styles do not meet our threshold for uniqueness.\n- *Presenting side-by-side sets of art that match a tag signature, with the elements comprising that style articulated* via TagMatch\u2019s interpretability and data attribution. The humans can compare the similarity of the two sets (do they both match the listed tags?) and compare these sets to artwork matching a subset of the tags (does adding one more atomic tag actually make the style more unique than other existing work?)\n- Engaging questions like \u2018unprecedented similarity\u2019, where *humans can inspect a plaintiff\u2019s work, generated art, and art from the most similar existing artist*. Then, humans can decide if the generated art meets an unprecedented level of similarity to the plaintiff\u2019s style, exceeding the similarity seen by any other human artist. We explore this deeper in Appendix C.2.\nOur tool uses the scalability of machines to gain insight and retrieve the most relevant artworks from a vast corpus, so that human decision makers can be more informed in their judgments. Also, we present these insights in an easy-to-digest manner: instead of saying Artist A and generated art A\u2019 have a similarity of X (via some black-box metric), our tool says generated art A\u2019 is confused as having been authored by artist A (over hundreds of other artists) Y% of the time, while also surfacing examples and articulating stylistic terms that contribute to the similarity.\", \"intent_of_the_ai_user\": \"we exclude this from our framework, as determining a user\u2019s intent may be infeasible \u2013 we instead focus on questions with greater potential for quantitative approaches. We mention the Adobe work to show the emerging importance of artistic style copyright to legal and technical stakeholders alike.\\n\\nWe'd be happy to discuss this topic further, as we agree that it is very important.
In fact, a legal expert has been an integral part of our team from the start, so as to appropriately ground our work in existing legal scholarship.\"}", "{\"title\": \"Main paper should clarify\", \"comment\": \"Thanks to the authors for the addition of Appendix B. This discussion is much clearer than the framing in the main text!\\n\\nThe appendix now clarifies (1) the articulation of the goal of \\\"investigating whether there is any scientific support for the idea that there are identifiable individual artistic styles,\\\" and the (2) explicit clarification that \\\"neither is our purpose to automate the analysis of copyright infringement.\\\"\\n\\nGiven the clarified goals of the paper in the appendix, I suggest that the authors consider revising the paper to clarify (and pursue) these points in the main text rather than just the appendix.\", \"specifically\": \"On lines ~87 the argument that the \\\"existence and uniqueness of artistic styles can be reduced to classification\\\" sticks out as an undefended assertion (empirically). This equivalence of style to artist classification is not obvious to a computer vision expert - classifiers can have many inductive biases different from style - and this equivalence is not investigated empirically in this paper, even though the paper asserts that the system is useful for understanding style.\", \"for_example\": \"an artist classifier might achieve high accuracy by looking for an artist's signature in the corner of the image (or some other confounding feature) rather than making a judgment based on the holistic style.
In light of the clarified goals, the lack of any check of this assertion is one of the paper's main weaknesses, and should be fixed before publication.\", \"the_problem_could_be_addressed_if_the_authors_were_able_to_do_two_things\": \"(1) Change the wording around line 87 to clarify that, although legal arguments would suggest that styles can be reduced to artist identification, one of your goals in this paper is to, as the authors say, \\\"investigate whether there is any scientific support\\\" for this view, i.e., to empirically test whether an artist classifier can successfully identify an artist's style.\\n\\n(2) Then such an investigation should be done, i.e., conduct an evaluation of your classifier(s) to compare whether their classification judgments match what people perceive as \\\"style\\\" in an image. I can see a few ways to do it. One way to do this would be to conduct a human evaluation. Another way is to compare your classifier to the prior state-of-the-art in style identification, the CSD method from Somepalli 2024, i.e., to see if the judgements of your classifier would match their style metric. The Somepalli work is open-source and should be able to be done with an automatic evaluation. Alternately, you could adopt the Somepalli method as the basis for your classifier, and then you'd be able to argue that the empirical evidence they collected would support your point.\\n\\nLook forward to the authors' perspectives on the suggestions above.\"}" ] }
0OB3RVmTXE
Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
[ "Vinith Menon Suriyakumar", "Rohan Alur", "Ayush Sekhari", "Manish Raghavan", "Ashia C. Wilson" ]
Text-to-image diffusion models rely on massive, web-scale datasets. Training them from scratch is computationally expensive, and as a result, developers often prefer to make incremental updates to existing models. These updates often compose fine-tuning steps (to learn new concepts or improve model performance) with “unlearning” steps (to “forget” existing concepts, such as copyrighted data or the ability to generate explicit content). In this work, we demonstrate a critical and previously unknown vulnerability that arises in this paradigm: even under benign, non-adversarial conditions, fine-tuning a text-to-image diffusion model on seemingly unrelated images can cause it to “relearn” concepts that were previously “unlearned.” We comprehensively investigate the causes and scope of this phenomenon, which we term concept resurgence, by performing a series of experiments based on fine-tuning Stable Diffusion v1.4 alongside “mass concept erasure”, the current state of the art for unlearning in text-to-image diffusion models (Lu et al., 2024). Our findings underscore the fragility of composing incremental model updates, and raise new serious concerns about current approaches to ensuring the safety and alignment of text-to-image diffusion models.
[ "machine unlearning", "concept unlearning", "evaluation", "diffusion models", "text to image" ]
https://openreview.net/pdf?id=0OB3RVmTXE
https://openreview.net/forum?id=0OB3RVmTXE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "thdb4kNGxz", "VpXd5l8YE5", "BjJgpshw8x", "Bi1oXU64fU", "6qihcl0W1I", "0upDQD9vrB" ], "note_type": [ "official_review", "comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1730712873791, 1732200243629, 1730724680778, 1732200214458, 1730113798820, 1730816494978 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12167/Reviewer_vhR8" ], [ "ICLR.cc/2025/Conference/Submission12167/Authors" ], [ "ICLR.cc/2025/Conference/Submission12167/Reviewer_G92E" ], [ "ICLR.cc/2025/Conference/Submission12167/Authors" ], [ "ICLR.cc/2025/Conference/Submission12167/Reviewer_D6Um" ], [ "ICLR.cc/2025/Conference/Submission12167/Reviewer_CbHp" ] ], "structured_content_str": [ "{\"summary\": \"This paper reports an interesting behavior of unlearned diffusion models, called concept resurgence \\u2013 when a concept is unlearned from a diffusion model, this concept is observable again after fine-tuning. The cause of this phenomenon is analyzed in two ways: algorithmic factors and data-dependent factors. In short, concept resurgence occurs when unlearned model parameters are close to the parameters of a pre-trained model and when fine-tuning data is correlated to training sets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a very interesting phenomenon of unlearned models \\u2013 concept resurgence. To my understanding, this observation hasn\\u2019t been discussed in the unlearning domain.\", \"weaknesses\": [\"The supporting experiments are slightly below the ICLR standard. Overall, this paper should justify their claim via experiments but the supporting experiments are weak/handful.\", \"The interesting phenomenon is only evaluated on one unlearning model (i.e., MACE). 
Additional unlearning methods need to be evaluated, hopefully five different ones, e.g., Selective Amnesia (https://arxiv.org/abs/2305.10120), SALIENCY UNLEARNING (https://arxiv.org/abs/2310.12508), and more.\", \"For out-of-domain concepts, it would be useful to add some visualization on the correlation between a target concept and out-of-domain concepts.\"], \"questions\": [\"Can you apply additional unlearning methods (hopefully five different methods) to show the same concept resurgence phenomenon?\", \"Can you visualize a target concept and out-of-domain concepts to show some (semantic) distance between them?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their thoughtful and constructive comments. Unfortunately, we cannot fully incorporate their feedback during the rebuttal period, and have thus opted to withdraw our work from ICLR. We look forward to strengthening our manuscript and resubmitting at a later date.\"}", "{\"summary\": \"This paper focuses on the concept of \u201cconcept resurgence\u201d in text-to-image diffusion models. These models are often updated incrementally through fine-tuning and unlearning steps. The authors demonstrate that fine-tuning a diffusion model can cause previously \u201cunlearned\u201d concepts to reappear, even when fine-tuning on seemingly unrelated data. They conduct experiments using Stable Diffusion v1.4 and the Mass Concept Erasure (MACE) technique. The study investigates factors contributing to concept resurgence, including algorithmic choices (such as mapping concepts, regularization, and fine-tuning algorithms) and data-dependent factors (like CLIP distance and out-of-domain concepts).
The findings highlight the fragility of current model update paradigms and raise concerns about ensuring the safety and alignment of diffusion models.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors identify a previously unknown vulnerability (concept resurgence) in diffusion models, which is important for understanding the limitations of current model update strategies.\", \"This paper systematically examines both algorithmic and data-dependent factors contributing to concept resurgence, providing a detailed understanding of the phenomenon.\", \"The research has direct implications for the development and safety of diffusion models, as it highlights the need to address concept resurgence to ensure reliable and safe model performance.\"], \"weaknesses\": [\"While this paper focuses mainly on MACE as the unlearning algorithm, it remains unclear whether the observed results could be fully generalizable to other unlearning techniques, which can be considered to add for more comprehensive analysis.\", \"Since we cannot enumerate all possible concepts during evaluation, could the authors provide some insights on the metrics that we can use to measure the difficulty of the resurgence of a certain concept? This might help to reach a more general conclusion of the experiments.\", \"Aside from the two examined celebrity and object erasure tasks and specific benchmarks, it would be better to extend the evaluation on more diverse settings to see if the findings still hold.\", \"Minor: Though it might be out of the scope of this manuscript, it is very interesting to have some theoretical analysis regarding the observations.\"], \"questions\": \"Please kindly refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for their thoughtful and constructive comments. 
Unfortunately, we cannot fully incorporate their feedback during the rebuttal period, and have thus opted to withdraw our work from ICLR. We look forward to strengthening our manuscript and resubmitting at a later date.\"}", "{\"summary\": \"The paper investigates the phenomenon of concept resurgence in text-to-image diffusion models that have been fine-tuned to forget certain concepts. The authors show that after erasing certain concepts with MACE, fine-tuning on unrelated concepts can reintroduce the erased concepts. The authors carry out experiments where several parameters of the erasing/fine-tuning are varied to elucidate the various factors that contribute to concept resurgence.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This idea of concept resurgence is very interesting and pertinent to the safety/concept unlearning community in text-to-image models. To my knowledge this is the first work to identify such an issue.\", \"The paper is well-written and ideas are clearly communicated.\"], \"weaknesses\": [\"The experiments in the paper are only on models erased with MACE, although numerous SD erasure works [1,2] have been proposed. Without experiments on a few more baselines, logically speaking the evidence from the paper only supports the claim that concept resurgence occurs on models erased with MACE rather than in general, which would weaken its impact.\", \"Sec 4.3 seems to contradict the hypothesis that concept resurgence is more prominent if the weights from erasure were not moved far from the original weights, since I would assume LoRA makes smaller weight changes than full fine-tuning, yet the effects on resurgence are similar. Could the authors make this more quantitative and measure the deviation of the weights from the original values, for e.g., in the L2 sense?\", \"Have the authors tested resurgence on truly more 'abstract' concepts like nudity or violence? 
The current experiments focus on relatively 'easier' concepts that can be defined by a single or few synonyms, like the name of the celebrity or object. Concepts like nudity can be expressed by numerous synonyms and even abstractly by the names of artists who paint with nude styles, for example.\", \"Overall I found that the technical contribution of the paper to be somewhat lacking by ICLR's standards, even though the phenomena presented is novel. The experiments are focused on one baseline and two concept types (celebrities and objects). As the authors acknowledge in the limitations, the paper lacks theoretical insights into concept resurgence or any mitigation strategies.\"], \"minor_points\": \"- consider moving Eq 1 to the front of the paper and introduce MACE more thoroughly given that the experiments in the paper are focused on MACE.\\n- some missing references on early works in the area of erasure/safety in text-to-image models [1,2,3,4].\\n\\n[1] Zhang, Eric, et al. \\\"Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models. ArXiv abs/2303.17591 (2023).\\\" (2023).\\n\\n[2] Gandikota, Rohit, et al. \\\"Erasing concepts from diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[3] Heng, Alvin, and Harold Soh. \\\"Selective amnesia: A continual learning approach to forgetting in deep generative models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Schramowski, Patrick, et al. \\\"Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\", \"questions\": [\"Can the authors provide details on what the 'others' and 'synonyms' are in the different figures?\", \"Could the authors provide more experiment details, for e.g., what was the fine-tuning procedure to induce concept resurgence? 
Information like hyperparameters to reproduce the experiments is missing\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper examines a significant vulnerability in text-to-image diffusion models regarding the unlearning of unwanted concepts, termed \\\"concept resurgence.\\\" It demonstrates that fine-tuning diffusion models on seemingly unrelated and benign data can inadvertently lead to the re-emergence of previously erased concepts. This vulnerability raises serious concerns about the reliability of current unlearning methods, particularly for developers aiming to protect users from undesirable content. The authors conducted experiments using Stable Diffusion v1.4 and the Mass Concept Erasure (MACE) technique, revealing that concept resurgence can occur even under benign conditions. Further, the authors explore and try to identify various factors which may contribute to this issue, such as the choice of fine-tuning data and the regularization applied during unlearning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles a timely and practically relevant problem supported by a fair amount of experiments. Model unlearning regarding AI safety is an area with limited prior research, making this work particularly valuable.\", \"This work stands as a pioneering study in attempting to identify the concept resurgence phenomenon in text-to-image diffusion models.\"], \"weaknesses\": [\"The main weakness of this paper is its limited experimental scope. While the paper's key contribution is the concept resurgence phenomenon, it is supported only by limited empirical evidence. This calls for testing the phenomenon in various setups, yet the authors only use a single model, SD v1.4.
Given the availability of advanced models such as SDXL, EDM, MDT, and FLUX, it would be helpful to see experiments using other diffusion models, particularly those trained with flow matching objectives instead of score matching losses. Additionally, the authors exclude tasks related to artistic style removal and explicit content removal, citing evaluation challenges. However, it would still be valuable to demonstrate the concept resurgence phenomenon in these tasks, even if a fair evaluation is difficult. The current experimental setup is also limited in terms of dataset diversity. Providing additional qualitative examples beyond Figures 2 and 4 would strengthen the paper.\", \"To my understanding, this paper only experimented with a single unlearning technique, MACE. The authors need to explore more existing methods such as UCE, FMN, ESD, SDD etc. Even if MACE is a SOTA unlearning method, concept resurgence may not appear with the other baselines. Section 4.2, in particular, would benefit from a broader discussion of baseline methods.\", \"The authors propose three potential contributors to concept resurgence: mapping concept, regularization, and fine-tuning algorithms. However, the discussion in Section 4 lacks depth. The authors should offer theoretical justifications or at least propose a main hypothesis supported by empirical evidence. For example, in Figure 7, they suggest that \\u201cincreasing regularization increases concept resurgence in the celebrity erasure task, but has little impact on the object erasure task.\\u201d It would be helpful to identify the key factor causing this difference and explore how this factor might be used to prevent concept resurgence. Further, the authors conclude that the difference between full fine-tuning and LoRA fine-tuning does not affect concept resurgence. 
However, if sufficiently \\u201cdistant\\u201d fine-tuning can prevent concept resurgence, wouldn\\u2019t full fine-tuning be more effective than LoRA in doing so?\"], \"questions\": [\"What exactly is meant by \\u201cmapping concept\\u201d? I read the paper carefully but still find the term\\u2019s exact definition unclear. Did the authors use this term in the same way as in the MACE paper?\", \"Regarding Figure 5, what would happen if 10 or 100 objects were removed, as in the celebrity erasure task?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0NvSMb7xgC
Auditing Predictive Models for Intersectional Biases
[ "Kate Boxer", "Edward McFowland III", "Daniel B. Neill" ]
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), an auditing framework for detecting intersectional biases in classification models. CBS identifies the subgroup with the most significant bias against the protected class, compared to the equivalent subgroup in the non-protected class, and can incorporate multiple commonly used fairness definitions for both probabilistic and binarized predictions. We show that this methodology can detect subgroup biases in the COMPAS pre-trial risk assessment tool and in German Credit Data, and has higher bias detection power compared to similar methods that audit for subgroup fairness.
[ "predictive bias detection", "fairness auditing", "intersectional bias", "contextual bias", "group fairness definitions", "subgroup bias", "predictive bias" ]
https://openreview.net/pdf?id=0NvSMb7xgC
https://openreview.net/forum?id=0NvSMb7xgC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "WtzymOfx1M", "Jx9I22T6wj", "ECYPXCEe6S", "4qMDnPTzyT", "1DfXEFPQYU" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730124435813, 1732570597904, 1731173321342, 1730493381671, 1729623680298 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7474/Reviewer_mxzy" ], [ "ICLR.cc/2025/Conference/Submission7474/Authors" ], [ "ICLR.cc/2025/Conference/Submission7474/Reviewer_yyee" ], [ "ICLR.cc/2025/Conference/Submission7474/Reviewer_wtcB" ], [ "ICLR.cc/2025/Conference/Submission7474/Reviewer_rwk3" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel approach to auditing and detecting fairness biases in predictive models. The method, called Conditional Bias Scan (CBS), allows for identifying the subgroup with the most significant bias and comparing it with the equivalent subgroup in the non-protected class. Empirical evaluation suggests the effectiveness of the approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The main strengths of the paper are:\\n\\nS1) the motivation of the methodology is relevant, as identifying intersectional biases (in a tractable manner) is an open issue in the fairness literature;\\n\\nS2) the empirical evaluation supports the effectiveness of the method; \\n\\nS3) the algorithmic procedure for detecting the most significant subgroup seems novel.\", \"weaknesses\": \"The main shortcomings of the current version of the paper are:\\n\\n\\nW1) I think a few arguments should be taken into account and need further clarification:\\n* in [Ruggieri et al., 2023], the authors show that algorithmic fairness objectives are not compositional, i.e., even if the classifier is fair on some of the regions of the input space, due to the emergence of Yule\\u2019s effect, the overall system is not necessarily fair. 
This could hinder CBS's ability to evaluate the overall fairness of the system.\\n* in lines 183-184, the authors consider propensity score estimates for $Pr(A=1|X)$. This assumes (implicitly) that the protected group can be seen as a treatment variable, while this has been largely debated in the literature (see e.g., for gender [Hu and Kohler-Hausman, 2020]). A proper discussion of this aspect should be provided.\\n\\nW2) The overall presentation can be improved. For instance, I find the empirical evaluation in section 4 quite dense and difficult to follow. E.g., starting the whole section from lines 310-319 can help the reader better understand the purpose of the experimental evaluation and help describe the evaluation setup (e.g., datasets, baselines, hyperparameters and metrics). \\n\\nW3) The empirical evaluation can be improved. Currently, the evaluation is limited to the COMPAS and German Credit datasets, which are rather small scale. I would argue that testing CBS on larger-scale datasets such as $\\\\texttt{folktables}$ [Ding et al., 2021] and $\\\\texttt{WCLD}$ [Ash et al., 2023] would make the results more compelling. Moreover, I do think CBS can be exploited to audit different classifiers and their relative biases, even though such an experiment is not performed.\\n\\n\\n[Hu and Kohler-Hausman, 2020] - Hu, Lily, and Issa Kohler-Hausmann. \\\"What's sex got to do with machine learning?.\\\" In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 513-513. 2020.\\n\\n[Ruggieri et al., 2023] - Ruggieri, Salvatore, Jose M. Alvarez, Andrea Pugnana, Laura State and Franco Turini. \\\"Can we trust fair-AI?.\\\" In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 13, pp. 15421-15430. 2023.\\n\\n[Ding et al., 2021] - Ding, Frances, Moritz Hardt, John Miller, and Ludwig Schmidt.
\\\"Retiring adult: New datasets for fair machine learning.\\\" Advances in neural information processing systems 34 (2021): 6478-6490.\\n\\n[Ash et al., 2023] - Ash, Elliott, Naman Goel, Nianyun Li, Claudia Marangon, and Peiyao Sun. \\\"WCLD: curated large dataset of criminal cases from Wisconsin circuit courts.\\\" Advances in Neural Information Processing Systems 36 (2023): 12626-12643.\", \"questions\": [\"I have a few questions for the authors.\", \"q1) Could the authors clarify my doubts regarding W1)? In particular, what are the authors' considerations regarding the emergence of the Yule effect?\", \"q2) From Figure 3, it seems to me that the final results are heavily affected by which is the protected class specified. E.g., if we specify the \\\"Black defendants\\\" as the starting protected class, the subgroup with the highest F score (on Separation Scan for Recommendations) is \\\"Black Male Defendants\\\", while if we set \\\"Male Defendants\\\" as the starting class, the subgroup with the highest F score is \\\"Male Asian and Hispanic Defendants\\\". Even if I see why this occurs (in the first case, we refer to non-Black as the reference group, and in the second case, we refer to Females as the reference group), I would argue that this affects the ability of CBS to assess the intersectional biases occurring on a dataset. Can you comment on my observation further?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This study develops a new statistical test for identifying bias in prediction models across four different axes based on both the probabilistic outputs and the binarized classifications. This test builds upon likelihood ratio tests developed in the spatial and subset scan statistics literature. 
At its core, this test examines when the quantity of interest deviates significantly from its expectation across multiple intersectional subgroups. The test is then evaluated on a semi-synthetic dataset based on COMPAS and then on COMPAS itself, demonstrating its ability to identify bias and the most significantly impacted groups.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method proposed is relatively simple to implement and rigorously grounded in the hypothesis testing literature\", \"The method is flexible for the commonly discussed fairness metrics\", \"The synthetic experiments are designed well to demonstrate the efficacy of the method in different scenarios and metrics\"], \"weaknesses\": [\"The methods that are compared against seem to be quite old and I would be interested to see how they compare to newer methods in the literature (e.g. [1])\", \"More real world dataset studies would improve the study (e.g. folktables [2])\", \"The writing is verbose at times and could benefit from being more concise. This is especially true in Section 3 when describing the methods.\", \"[1] Cherian, John J., and Emmanuel J. Cand\u00e8s. \\\"Statistical inference for fairness auditing.\\\" Journal of Machine Learning Research 25.149 (2024): 1-49.\", \"[2] https://github.com/socialfoundations/folktables\"], \"questions\": \"1. How difficult does it appear to be to extend this framework to a multi-label setting?\\n2. How computationally expensive does this method get when scaling the number of groups to be evaluated over? As it stands it appears that the maximum number of intersectional groups is 4 in these experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors investigate the problem of intersectional bias in classification and develop a novel search method for identifying intersectional bias.
The authors compare their method to other auditing methods on semi-synthetic data.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"**[Problem Importance]** The authors study an important problem.\", \"**[Practicality]** The proposed method can accommodate a large number of fairness definitions that prior works are not able to accommodate.\"], \"weaknesses\": [\"**[Clarity]** Several important aspects of the paper are not articulated clearly. In particular, I found it difficult to follow the authors\\u2019 experimental design, a few examples include:\", \"Section 4: What is a \\u201crow\\u201d? The authors refer to specific rows or \\u201crow $i$\\u201d without defining this term. Is the row a particular data point, or is it a row of the covariates?\", \"When defining the true log-odds in their semi-synthetic data the authors say, \\u201cWe use these weights to produce the true log-odds of a positive outcome $(Y_i = 1)$ for each row $i$ by a linear combination of the attribute values with these weights.\\u201d\", \"This statement is quite vague and does not rigorously outline how the log-odds are computed. The authors denote the true log-odds as $L_i^{\\\\text{true}}$; perhaps a definition could be given for this quantity.\", \"**[Narrative vs Experiments]** There is a strong disconnect between the authors' results and the discussion/motivation of the paper. For example, the authors spend substantial time going over different fairness metrics and discussing the applicability of their method to each metric. However, no experimental results are shown for any such metrics. The authors simply shift the predicted probabilities, or true probabilities, of some individuals by some value and measure whether their algorithm, or the baseline, can identify those individuals. 
I would have liked to see some results showing the accuracy of the authors' method as a function of subgroup unfairness under a particular metric.\", \"**[Synthetic Data]** Due to the way in which the synthetic data is constructed, I find it difficult to appreciate some of the authors\\u2019 results. In particular, the authors randomly select sensitive attributes among all attributes in the data and change the true labels to have a noisy linear relationship with the features. Both choices destroy the innate relationships between features, sensitive attributes, and true labels, which cause unfairness in the base datasets (e.g., COMPAS). Further, it is not clear to me why we need synthetic data in the first place. The authors are working with two datasets that are known to possess innate bias both at the group level and the subgroup level; this begs the question as to why we are not shown results comparing the authors' method to SotA methods on these datasets without any synthetic modifications.\", \"**[Simplistic Experiments]** In addition to the issues with synthetic data above, the authors only show experiments for two datasets and two classifier types.\", \"**[Evaluation Metric]** When comparing to the baseline, the authors measure the IOU of the predicted subgroups $S^*$ and the subgroups with injected bias $S_{\\\\text{bias}}$, given on line 371. Without knowing whether or not the subgroups in $S_{\\\\text{bias}}$ are disadvantaged (and to what degree), it is difficult to appreciate the use of this metric.\", \"**[Comparisons to Baselines]** When comparing their method to baselines, e.g., in Figure 1, the authors find that their method is only superior to baselines for relatively large amounts of bias. Moreover, in results such as Figure 2, it seems that the authors\\u2019 methods can achieve extremely poor accuracy depending on the type of bias present (negative or positive delta). 
Without a priori knowing the type of bias, it may be difficult to meaningfully apply these methods in practice. Lastly, the results in each of the aforementioned and similar plots are difficult to interpret because we cannot understand how much bias a specific value of $\\delta$ or $\\mu$ corresponds to. It would be helpful to see the level of bias converted into actual fairness metrics.\"], \"questions\": \"Could the authors please address my concerns in the above section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors address the challenge of auditing machine learning models for intersectional biases (fairness gerrymandering). They introduce a methodology called Conditional Bias Scan (CBS) for detecting biases that affect specific subgroups, which may arise from intersectional factors (membership in two or more protected classes) or contextual factors (decision situations). The CBS methodology involves four stages: (1) initializing the event variable I, protected class A, covariates X, and conditional variable C based on the input parameters and chosen fairness definition; (2) estimating the expected value of I under the null hypothesis; (3) using a multidimensional subset scan to identify subgroups that systematically deviate from the expected values computed in step (2) and selecting the most significant ones; and (4) assessing the statistical significance of the detected subgroups. The paper includes a comprehensive experimental evaluation to validate the approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a framework for auditing intersectional biases, a crucial area often overlooked in fairness assessments (detection of gerrymandering).\\n2. 
The proposed method can accommodate different group fairness metrics and can effectively scan numerous subgroups.\", \"weaknesses\": \"1. The reliability of the estimation of expectations I under the null hypothesis depends on having well-specified models for estimating the propensity scores of the protected class.\\n2. The paper is quite dense and challenging to follow. It would benefit from providing more intuitive explanations or examples to illustrate why the overall method is effective in real-world scenarios. This would help readers better understand the practical implications and the rationale behind the approach.\", \"questions\": \"1. How does CBS handle scenarios with continuous covariates without discretization, and how does it address the potential loss of valuable information during this process?\\n2. What are the implications of using different models for estimating conditional expectations on the detection of biases? Beyond modeling the COMPAS data, is there additional evidence of real-world datasets to support the effectiveness of this method? See also weakness 2. \\n3. Could you clarify the distinction between auditing the COMPAS dataset and the model itself? The explanation in Section 5 is not entirely clear.\\n4. Regarding Figure 3: Does the framework detect the entire group of defendants under the age of 25? What is the role of the conditional variable C in the null hypothesis in this case?\\n5. Consider making the tables more consistent in terms of formatting and presentation for easier comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
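The semi-synthetic label construction questioned in the reviews above (producing each row's true log-odds as a linear combination of that row's attribute values with fixed weights) could be sketched as follows. The attribute matrix and the weight vector here are hypothetical stand-ins, not the paper's actual generator:

```python
import numpy as np

def semi_synthetic_labels(X, w, rng):
    """Draw binary outcomes whose true log-odds L_i are a linear
    combination of the row's attribute values with weights w."""
    log_odds = X @ w                      # L_i for each row i
    p = 1.0 / (1.0 + np.exp(-log_odds))  # sigmoid -> P(Y_i = 1)
    y = (rng.random(len(p)) < p).astype(int)
    return y, log_odds

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 5)).astype(float)  # binary attributes
w = np.array([1.5, -0.5, 0.0, 2.0, -1.0])             # hypothetical weights
y, L = semi_synthetic_labels(X, w, rng)
```

Injecting subgroup bias, as in the experiments discussed, would then amount to shifting `L` (or `p`) by some `delta` on the rows belonging to the targeted subgroup before sampling `y`.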
0NEjIZlEhP
Verified Relative Output Margins for Neural Network Twins
[ "Anahita Baninajjar", "Kamran Hosseini", "Ahmed Rezine", "Amir Aminifar" ]
Given two neural network classifiers with the same input and output domains, our goal is to compare the two networks in relation to each other over an entire input region (e.g., within a vicinity of an input sample). Towards this, we introduce and quantify the Relative Output Margin (ROM) with which decisions are made. A larger output margin for a network w.r.t. another indicates that this network consistently makes a correct decision every time the other network does, and it does so in the entire input region. More importantly, as opposed to best-effort testing schemes, our framework is able to establish provably-correct (formally verified) bounds on ROM gains/losses over an entire input region. The proposed framework is relevant in the context of several application domains, e.g., for comparing a trained network and its corresponding compact (e.g., pruned, quantized, distilled) network. We evaluate our framework using the MNIST, CIFAR10, and two real-world medical datasets, to show its relevance.
[ "Relative Output Margin", "Formal Verification", "Deep Neural Networks" ]
Reject
https://openreview.net/pdf?id=0NEjIZlEhP
https://openreview.net/forum?id=0NEjIZlEhP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ui0x8oOPW7", "rSj3YMky2I", "q3xQfmKcci", "ptPhyAnaxC", "oaNxZX9Em6", "nX2keKXAbk", "lDalaE9EU9", "l7LWkTuOKQ", "jVchEkSLat", "iMvYCPi6L2", "i9DlmJLNWu", "h1MVkNwteF", "gGkrjPmMoU", "gEQ1rgAkSr", "avhu1IrB8Z", "a0DuxGaPCU", "NXqKjBDu4Q", "MLa957aBPm", "KddxxI2ckV", "FdR6lxz7uz", "Am2RCdYS8z", "7EKSTuRTkN", "60prFq0T1m", "54Mh86rQ7r", "3yYlWaI42B", "3sUm9CkYK2", "2Xs1WqCdrL", "26XgcyFflS" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729190093356, 1730748587508, 1731418889423, 1732634528398, 1732446391223, 1732467771611, 1733128771052, 1732446551619, 1732873073544, 1732700564778, 1733004890890, 1734962504038, 1733025969175, 1737523998430, 1730388565652, 1732468397456, 1733082197930, 1740559297711, 1733144845817, 1732351092023, 1730671208209, 1733134076776, 1732709809381, 1732873309771, 1732962794213, 1733133752679, 1732345125564, 1732963051684 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_eQgt" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_tfBm" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_mznd" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_eQgt" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_eQgt" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_c8iL" ], [ 
"ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Area_Chair_dctA" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_eQgt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_c8iL" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_dty1" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "~Anahita_Baninajjar1" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_dty1" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_tfBm" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Reviewer_tfBm" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ], [ "ICLR.cc/2025/Conference/Submission9665/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a methodology for verifying agreement in neural networks that try to approximate the same target function.\\nThe proposed method focuses on the predicted likelihood-ratio between two classes (denoted as OM). In particular, it gives local bounds on the difference in log OM between the two networks (log LROM). The bounds are given for local neighborhoods, assuming that the correct label is known and constant in the region. The bounds on log LROM are obtained by relaxing the exact optimization objective to an approximate one, solvable with linear programming. \\nThe paper evaluates the proposed approach on a variety of scenarios, comparing distillation and quantization techniques, as well as evaluating the robustness of adversarially trained networks.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well structured, clear and straight to the point. 
The contribution is well laid out, and the examples for the empirical evaluation are rich, relevant and diverse.\", \"weaknesses\": \"The proposed metric of ROM represents the difference in prediction confidence, and does not necessarily relate to the agreement of the predictions. The LROM metric is only informative when the bounds do not contain zero. Moreover, even assuming a point has verifiable LROM, it might not be verifiable in the desired direction.\\n\\nMoreover, the correctness of the LROM metric depends on the assumption that the correct label remains constant in the considered region. Therefore it is only sound for small enough neighborhoods. Looking at the empirical results, in many scenarios, the percentage of verifiable points rapidly drops to zero as the neighborhood size increases. \\nIt is unclear if the proposed approach provides a qualitatively different result than a direct pointwise comparison of model predictions.\", \"questions\": \"The authors point out that having a strictly positive/negative LROM between two networks is a sufficient condition to ensure that one makes better predictions than the other. I understand that ROM as formulated has the advantage of being invariant to the choice of threshold, but I find it paints an incomplete picture, and can be misleading. In particular, two networks can have 100% compatible predictions, yet the ROM value could vary significantly. Similarly, the same ROM can represent drastically different situations. In fact, if the two networks have log OM of +0 and +10, we are comparing a coin toss to an extremely accurate predictor. 
This is not the same as having two extremely confident networks with log OM of +1000 and +1010, but both situations have log ROM of +10.\\n\\nThe discussion on the effectiveness of adversarial training is interesting, however I can't seem to find any plot/table of the results described in the main text.\\n\\nI also would be interested in a more in-depth discussion of the effects of the $\\\\delta$ parameter on the metric. Clearly for small enough values, the approach reduces to a pointwise comparison of model predictions. To justify the approach, it should be shown (at least empirically) that evaluating LROM gives significantly different results.\\n\\nThe choice of the correct $\\\\delta$ seems critical. If it is too large, the metric becomes meaningless and the number of verifiable points is likely to drop. Can you provide (at least) some heuristic to quantify a good value for $\\\\delta$?\\nAlso, have you considered making it a function of $x$, trying to estimate the largest local region where the correct label remains constant?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is interested in certifying the output of a given network w.r.t. another one, targeting use-cases like pruning/distillation/quantization, with the goal of demonstrating that the pruned/distilled/quantized network exhibits not only similar accuracy over the train set, but even consistent decisions around the same local regions around each training example.\\n\\nTo do so, they introduce a novel measure called Relative Output Margin (ROM), which is the ratio between the Output Margin (OM) of two networks at a given input point. 
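Pointwise, the quantities debated above reduce to plain logit differences. A minimal sketch of that reduction, with hypothetical logit vectors for two binary classifiers (this is the single-point view, not the verified bound over a region):

```python
import numpy as np

def log_om(logits, correct, wrong):
    """Log output margin: log of the softmax likelihood ratio between two
    classes, which cancels down to a plain difference of logits."""
    return logits[correct] - logits[wrong]

def log_rom(logits_a, logits_b, correct, wrong):
    """Log relative output margin of network A w.r.t. network B at one input."""
    return log_om(logits_a, correct, wrong) - log_om(logits_b, correct, wrong)

za = np.array([3.0, -2.0])  # hypothetical logits of network A
zb = np.array([1.0, 0.5])   # hypothetical logits of network B
print(log_rom(za, zb, correct=0, wrong=1))  # (3 - (-2)) - (1 - 0.5) = 4.5
```

The reviewer's example maps directly onto this: pairs of networks with log OM (0, 10) and with log OM (1000, 1010) both give a log ROM of 10, which is exactly the relativity being discussed.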
Finally, by taking the minimum of the ROM over a whole infinity-norm ball centered around a given point, they define the Local ROM (LROM).\\n\\nComputing the exact LROM is a hard problem, but its linear relaxation for feed-forward ReLU networks is tractable and yields a lower bound, which is sufficient for the purpose of certification. \\n\\nThe algorithm is tested in the context of pruning, quantization, and distilled networks, on two image datasets (MNIST, CIFAR-10), and two tabular/signal datasets (CHB-MIT and MIT-BIH).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"### Originality\\n\\nI think the main originality of the paper lies in considering the joint optimization of the margins of two networks. Intuitively, one can understand why this is a superior approach compared to optimizing separately the bounds and trying to aggregate them (although the optimization problem is now double the dimension), as shown in Figure 2.\\n\\nFocusing on the context of pruned/distilled/quantized networks is also relevant, re-targeting the (sometimes too ambitious) conventional goal of certifying a given network into just showing that some network is not much worse than another.\\n\\n### Clarity and soundness\\n\\nThe paper is clear overall, proofs look correct.\", \"weaknesses\": \"### Novelty\\n\\nMy main source of concern is the novelty. Computing certificates for ReLU-based networks is a well-established methodology that relies on various linear relaxations (as used in this paper), or interval propagation. 
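For context on the machinery this review refers to, a bare-bones interval bound propagation pass over one hidden ReLU layer is sketched below. The weights and input are made up for illustration; real certification tools use tighter linear relaxations, and the paper's joint two-network formulation is an LP rather than this single-network propagation:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an elementwise interval [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Sound lower bound on the logit margin z[0] - z[1] over an l_inf ball.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)   # hidden layer
W2, b2 = np.array([[2.0, -1.0], [-1.0, 2.0]]), np.zeros(2)  # output layer
x, delta = np.array([1.0, 0.2]), 0.05
lo, hi = ibp_relu(*ibp_affine(x - delta, x + delta, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
margin_lb = lo[0] - hi[1]  # guaranteed, but generally loose, margin bound
```

The looseness comes from treating each neuron's interval independently; an LP relaxation keeps the linear coupling between neurons (and, in the joint setting, between the two networks), which is why it yields tighter margin bounds.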
\\n\\nThis paper utilizes these tools, and the only novelty is to consider a joint optimization of the difference of logits for two networks, instead of a single one, which I consider a straightforward extension departing from existing methods.\\n\\nTherefore the main contribution of the paper is introducing the OM, ROM and LROM measures, and using the whole Section 2 to deal with rather trivial considerations.\\n\\nAll the proofs of appendix A.1 are a trivial consequence of manipulating the log of probability ratios, to end up with a simple difference of logits. This is colloquially called the *margin*, not to be confused with the Output Margin (OM) measure that the authors introduce, without clear motivation.\\n\\nIn most papers for NN certification, this margin is analyzed, reported or even optimized (using Hinge loss) in a straightforward manner, without bothering to highlight the link with output probabilities. In this regard, the theoretical contribution appears rather shallow IMHO. \\n\\nDropping this narrative would even allow using the method of the paper outside the context of classification. Even the concept of \\u201ctwin networks\\u201d looks overkill to simply describe networks operating over the same input/output spaces.\", \"questions\": \"1) LROM is asymmetric, i.e., inverting DNN N1 and N2 yields a different bound. Can you comment on this property? Is this something desirable? 
What if LROM(N1, N2) equals some value, and LROM(N2, N1) equals another, is there something to interpret here (qualitatively)?\\n\\n2) When LROM is very negative (i.e., the logits difference is huge) which implies that probabilities are near-zero, not too far away from machine precision zero, do you expect this value to carry meaningful information?\\n\\n3) Can you clarify use-cases in which LROM is useful for practitioners?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"I am pretty new to this field so I will try to summarize what I understand to be the key contributions of this paper. If I am wrong, please do let me know in the comments:\\n\\n1. The authors introduce a new way to compare two neural networks (e.g. an original and a compressed version of the same net) by looking at their \\\"relative output margins\\\" = essentially comparing how confidently they make the same decisions.\\n\\n2. They provide a formal verification framework that can prove, within a small neighborhood of a given input (like a small perturbation of an image), that one network will always make decisions at least as confidently as another network when they're both correct.\", \"they_demonstrate_this_is_practically_useful_when\": \"a. Comparing original networks with their pruned/quantized/distilled versions\\nb. Analyzing medical AI systems where reliability is crucial\\nc. 
Understanding the relationship between regular and adversarially-trained models\\n\\nThe key innovation is that instead of trying to verify properties across all possible inputs (which would be intractable), they focus on small, local neighborhoods around specific inputs and use linear programming techniques to efficiently compute provable bounds on the networks' relative behavior in these regions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem the authors are addressing is important and relevant:\\nHow to formally compare two neural networks' decision confidence across many examples and not only at the given evaluation datapoints. This is especially relevant when we modify networks via e.g. quantization and pruning but need guarantees (in high stakes settings such as medicine).\\n\\nThe linear programming formulation makes the solution practical, which is good -- imho a method that can be practically applied is a key to wide adoption and impact.\\n\\nI also appreciate the comparison to adversarially trained models.\", \"weaknesses\": \"I have a few concerns:\\n\\n1. Scalability\\n\\n1.1. Linear programming can be expensive -- how well does this scale to larger networks?\\n1.2. Small networks and small regions are shown in the paper. How well would this do on e.g. a ~100M parameter ViT and with larger regions?\\n\\n2. How tight are the bounds? Do you have any experiments to demonstrate that? It would also be great to discuss worst-case scenarios with examples and develop some kind of a rudimentary case study of that.\\n\\n3. Small regions\\nThe small perturbation sizes used (0.001, 0.01) may not reflect real-world distortions. If I add a bit of a Gaussian noise to the whole image, I can easily get much higher delta. \\n\\n4. 
Comparison to other linear programming based methods\\nI think there are other methods that use local approximations to calculate the difference between networks (I might be wrong on this), yet you don't compare to them?\\n\\n5. Medical Application Claims\\nIt would be good to compare your estimates to some sources of ground truth. Is that feasible?\", \"questions\": \"Included in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I believe my concerns might have been misunderstood, as the comment does not address them.\\nI will try to clarify what I meant and why the replies failed to change my opinion.\\n\\n> ROM represents relative difference in prediction confidence, but it is related to the agreement of the predictions. \\n[...]\\nOur framework provides guaranteed sound bounds in such cases.\\n\\nIn my original comment I tried to point out that higher/lower OM does not relate to agreement in predictions. There are 4 scenarios: both models predict the correct class, both models predict the incorrect class, the models predict different classes.\\nThe OM value, only serves to exclude one of the disagreement scenarios. Therefore, independently of OM, the models can disagree, agree and be correct, agree and be incorrect.\\n\\n> We would like to also highlight that we do not make \\u201cthe assumption that the correct label remains constant in the considered region\\u201d. \\n[...] \\nFor (-,+), LROM will be negative. For (-,-), positive LROM shows that OM(N1)>OM(N2).\\n\\nI understand that you do not need the assumption that the label remains constant in the domain to calculate the metrics. However, if the label is not constant in the domain the metrics you calculate are meaningless. 
\\n\\n>In addition, our experiments show that our approach is substantially more informative/accurate than looking individually (i.e., direct pointwise comparison of model predictions) into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\nI believe the figure referenced does not show what I asked. It shows how the joint analysis gives tighter bounds than the independent one. In the limit for $\\\\delta=0$, the approach reduces to simply looking at the network predictions at a single point.\\nThe claims of robustness should be evaluated by comparing the proposed approach to this much simpler and naive method. \\n\\n> Consider OM(N1)=0 and OM(N2)=10, which leads to [...] if LROM(N3,N2)>0, then we know that LROM(N4,N1)>0, since we already know LROM(N2,N1)>0 and LROM(N4,N3)>0.\\n\\nI understand that relative metrics are transitive. However, my original comment was meant to highlight how this metric can be misleading. In your example, you consider the case when the scenarios with equal LROM and different logits are happening on different networks. My comment referred to evaluating LROM for different points but for the same networks. Consider a distilled network N1 and a baseline network N2. If LROM(N1)=1000 and LROM(N2)=1010, then I conclude that the distilled network is consistently underconfident compared to the baseline. But does this matter? Does it make sense to treat this scenario the same as LROM(N1)=0 and LROM(N2)=10? \\n\\n>In our experiments (Section 4.3), we consider the adversarially-trained networks and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one.\\n[...]\\nThe value of $\\\\delta$ is application-dependent, e.g., network architecture and dataset. 
One potential approach to estimate the largest local region (captured by $\\\\delta$) is to iteratively perform our framework, similar to binary-search on $\\\\delta$.\\n\\nThe results mentioned are only reported in the main text. The comparison to the limit for $\\\\delta=0$ should be included in all results as a baseline. This ties to my other comments on the value of $\\\\delta$. The proposed method has merits only if you are able to verify a large enough proportion of points for a large enough local radius, but not so large that the label cannot be assumed constant in the local region. This cannot be treated as a hyperparameter of secondary importance.\\n\\nI'm afraid that the authors' replies to some of the concerns I've raised only strengthen my confidence in the original score. \\nWhile I personally find the work and ideas interesting, I believe that the claims are not substantiated by a thorough evaluation. Moreover, the theoretical contributions offered are not sufficient to warrant acceptance by themselves.\"}", "{\"comment\": \"We acknowledge the reviewer\\u2019s efforts in assessing our work and are grateful for their feedback. We have addressed the comments and inquiries below.\\n\\n\\n>The LROM optimization framework requires handling complex linear programming tasks, which may limit scalability for larger networks. It would be better to test the framework further on larger neural networks, such as language models.\\n\\nWe would like to highlight that this limitation is already explicitly mentioned in our paper (Lines 533\\u2013535). Scalability is indeed a common challenge in the entire formal verification area (not particular to this work) [1], but it is the price to pay for providing formal guarantees, which is necessary in safety-critical applications, e.g., in the medical domain. 
\\n\\nThat being said, we have based our experiments for the MNIST and CIFAR-10 datasets on neural networks sourced from the state-of-the-art studies in verification of neural networks [1, 2] in Sections 4.3.1 and 4.3.2.\\nThe neural networks for the CHB-MIT and MIT-BIH datasets in Sections 4.3.3 and 4.3.4 are CNNs, adopted from a recent ICML paper [3]. \\n\\n\\n[1] Proof transfer for fast certification of multiple approximate neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2022.\\n\\n[2] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[3] VNN: Verification-friendly neural networks with hard robustness guarantees. In the International Conference on Machine Learning (ICML), 2024.\\n\\n\\n\\n>It may be challenging to interpret the evaluated measures due to the technical intricacies involved in LROM computation.\\n\\nWhile we understand the reviewer\\u2019s point of view, this is the nature of formal methods and verification domains, where the properties need to be defined and proved formally to be able to provide hard guarantees. In our final version, should the space permit, we will provide more intuitions for the formal content.\\n\\n\\n>It would be better to conduct a comparison with existing adversarial robustness metrics and methods. How do the proposed measures differ from the existing adversarial robustness measures?\\n\\nAdversarial robustness metrics are designed to quantify a model's ability to withstand adversarial attacks, but they do not guarantee the results. On the other hand, formal verification provides sound results, meaning it guarantees the extent to which a network is robust against perturbations. 
Our work is based on formal verification, where we provide formal guarantees on the relationship between the output margins of two networks.\\n\\nIn our experiments (Section 4.3), we consider the adversarially-trained networks and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d (i.e., the proportion of test samples for which a model is guaranteed to remain robust, i.e., correctly classify, within a specified perturbation radius). As such, our experiments show that LROM provides more insight into the difference between two networks, than \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d.\\n\\nOn the other hand, currently, there is no adversarial technique to perform such analyses jointly (for the same perturbation) for two networks. At the same time, performing the analysis independently would lead to excessive over-approximation and lead to inconclusive results (as shown in our experiments). \\n\\nOur experiments show that our approach is substantially more informative/accurate than looking individually into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\n\\n>What would be the applications of the framework beyond network twins (similar architectures with compact versions), such as varied neural architectures?\\n\\nOur framework and approach is by no means limited to \\u201csimilar architectures with compact versions\\u201d. The only requirement is to have the same input/output domains and that is necessary (because we need to check the outputs of the two networks for the same input). 
Please see Lines 042-045: \\u201cIn this work, we focus on neural network twins, i.e., two neural networks trained for the same learning/classification task, with the same input and output domains, but not the same weights and/or architectures.\\u201d\\n\\n\\nWe thank the reviewer for their time. We'd appreciate knowing if our clarifications on contribution have satisfied you and if our clarified perspective might improve your evaluation of our work and your confidence in our work.\"}", "{\"comment\": \"We thank the reviewer for their review. We have addressed the comments below.\\n\\n>The proposed metric of ROM represents the difference in prediction confidence, and does not necessarily relate to the agreement of the predictions. The LROM metric is only informative when the bounds do not contain zero. Moreover, even assuming a point has verifiable LROM, it might not be verifiable in the desired direction.\\n\\nROM represents relative difference in prediction confidence, but it is related to the agreement of the predictions. If LROM of N1 w.r.t. N2 is positive, it means that the OMs of N1 are consistently higher than the OMs of N2 and that is in *all directions* in the neighborhood we are considering.\\n\\nEven when LROM is negative, it still provides us with an informative quantitative measure. For instance, the compact models may not always be as good as their original counterparts. Therefore, the requirements from the application might be that the margins in the compact network may not be less than a certain percentage (80%) of the original network\\u2019s margins (i.e., negative LROM is accepted, but not less than a certain limit). Our framework provides guaranteed sound bounds in such cases.\\n\\n>Moreover, the correctness of the LROM metric depends on the assumption that the correct label remains constant in the considered region. 
\\u2026 It is unclear if the proposed approach provides a qualitatively different result than a direct pointwise comparison of model predictions.\\n\\nWe would like to also highlight that we do not make \\u201cthe assumption that the correct label remains constant in the considered region\\u201d. Regardless of whether the correct label remains constant or not, our approach provides information about the LROM of two networks, indicating which network has a higher LROM than the other. Consider for example four cases for the margins of N1 and N2: (OM(N1),OM(N2))=(+,+), or (+,-), or (-,+), or (-,-). For (+,+), LROM will be positive if OM(N1)>OM(N2). For (+,-), LROM will be positive. For (-,+), LROM will be negative. For (-,-), positive LROM shows that OM(N1)>OM(N2).\\n\\nIn addition, our experiments show that our approach is substantially more informative/accurate than looking individually (i.e., direct pointwise comparison of model predictions) into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\n>... In fact, if the two networks have log OM of +0 and +10, we are comparing a coin toss to an extremely accurate predictor. This is not the same as having two extremely confident networks with log OM of +1000 and +1010, but both situations have log ROM of +10.\\n\\nConsider OM(N1)=0 and OM(N2)=10, which leads to LROM(N2,N1)=10. This is a *relative* condition between N1 and N2. \\n\\nConsider OM(N3)=1000 and OM(N4)=1010, which leads to LROM(N4,N3)=10. This is a *relative* condition between N3 and N4.\\n\\nTherefore, based on LROM(N2,N1) and LROM(N4,N3), we cannot draw any conclusions for (N1 or N2) vs (N3 or N4). 
However, if LROM(N3,N2)>0, then we know that LROM(N4,N1)>0, since we already know LROM(N2,N1)>0 and LROM(N4,N3)>0.\\n\\n\\n\\n>The discussion on the effectiveness of adversarial training is interesting, however I can't seem to find any plot/table of the results described in the main text.\\n\\nDue to the page limit, we only described the results and were unable to include a table/plot. Below is the summary of the results:\\n\\n| |Non-defended|PGD1|PGD3|\\n|-|-|-|-|\\n| Non-defended|-|0%|0%|\\n| PGD1|57%|-|24%|\\n| PGD3|56%| 38%|-|\\n\\n>... Clearly for small enough values, the approach reduces to a pointwise comparison of model predictions. To justify the approach, it should be shown (at least empirically) that evaluating LROM gives significantly different results.\\n\\nIn our experiments (Section 4.3), we consider the adversarially-trained networks and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d (i.e., the proportion of test samples for which a model is guaranteed to remain robust, i.e., correctly classify, within a specified perturbation radius). As such, our experiments show that LROM provides more insight into the difference between two networks, than \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d.\\n\\n>...Can you provide (at least) some heuristic to quantify a good value for $\\\\delta$? \\u2026estimate the largest local region where the correct label remains constant?\\n\\nThe value of $\\\\delta$ is application-dependent, e.g., network architecture and dataset. One potential approach to estimate the largest local region (captured by $\\\\delta$) is to iteratively perform our framework, similar to binary-search on $\\\\delta$. \\n\\nWe thank the reviewer again. 
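For concreteness, the binary search on $\\delta$ mentioned above could be sketched as follows; `verify_lrom` is a hypothetical stand-in for the LP-based certification procedure, and the threshold it encodes here is purely illustrative:

```python
def verify_lrom(n1, n2, x, delta):
    """Hypothetical oracle: True iff LROM(n1, n2) > 0 can be certified
    on the ball of radius delta around x (stand-in for the LP check)."""
    return delta <= 0.01  # toy threshold, for illustration only


def largest_certified_radius(n1, n2, x, lo=0.0, hi=0.1, tol=1e-4):
    """Binary-search the largest delta for which certification succeeds,
    assuming success is monotone in delta (smaller balls are easier
    to certify, which holds for sound over-approximations)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if verify_lrom(n1, n2, x, mid):
            lo = mid  # certified: try a larger ball
        else:
            hi = mid  # failed: shrink the ball
    return lo
```

With the toy oracle above, `largest_certified_radius(None, None, None)` converges to just under the oracle's threshold of 0.01.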
We'd appreciate knowing if our clarifications/results have been satisfactory and if our clarified perspective might improve your evaluation of our work.\"}", "{\"comment\": \">We have now briefly addressed the remaining parts\\n\\nYou have not.\\nI maintain my score\"}", "{\"comment\": \"We acknowledge the reviewer\\u2019s efforts in assessing our work and are grateful for their feedback. We have addressed the comments and inquiries below.\\n\\n\\n>1. Scalability\\n1.1. linear programming can be expensive -- how well does this scale to larger networks? 1.2. small networks and small regions are shown in the paper. How well would this do on e.g. a ~100M parameter ViT and with larger regions? \\n\\nWe would like to highlight that this limitation is already explicitly mentioned in our paper (Lines 533\\u2013535). Scalability is indeed a common challenge in the entire formal verification area (not particular to this work) [1], but it is the price to pay for providing formal guarantees, which is necessary in safety-critical applications, e.g., in the medical domain. \\n\\n\\nThat being said, we have based our experiments for the MNIST and CIFAR-10 datasets on neural networks sourced from the state-of-the-art studies in verification of neural networks [1, 2] in Sections 4.3.1 and 4.3.2.\\nThe neural networks for the CHB-MIT and MIT-BIH datasets in Sections 4.3.3 and 4.3.4 are CNNs, adopted from a recent ICML paper [3]. \\n\\n\\n[1] Proof transfer for fast certification of multiple approximate neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2022.\\n\\n[2] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[3] VNN: Verification-friendly neural networks with hard robustness guarantees. In the International Conference on Machine Learning (ICML), 2024.\\n\\n\\n>2. How tight are the bounds? Do you have any experiments to demonstrate that? 
It would also be great to discuss worst-case scenarios with examples and develop some kind of a rudimentary case study of that. \\n\\nWhile the tightness of the bounds is interesting, we would like to highlight that our approach is *sound*. This means that, when our framework concludes that a network consistently makes a correct decision every time the other network does, in the entire input region, the conclusion is guaranteed to be correct. \\n\\nIn terms of tightness, our experiments show that our approach is substantially more accurate/tight than looking individually into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\n\\n>3. Small regions The small perturbation sizes used (0.001, 0.01) may not reflect real-world distortions. If I add a bit of Gaussian noise to the whole image, I can easily get a much higher delta. \\n\\nOur experiments show that even with such small regions, our framework can provide more insight into the difference between two networks. In our experiments (Section 4.3), we consider the adversarially-trained networks and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d (i.e., the proportion of test samples for which a model is guaranteed to remain robust, i.e., correctly classify, within a specified perturbation radius). As such, our experiments show that LROM provides more insight into the difference between two networks than \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d.\\n\\n>4. Comparison to other linear programming based methods I think there are other methods that use local approximations to calculate the difference between networks (I might be wrong on this), yet you don't compare to them?
\\n\\nTo the best of our knowledge, this is the first work aiming to formally compare two neural networks jointly. We are happy to make such comparisons should the reviewer provide a reference.\\n\\nAs discussed earlier, in our experiments, we have compared our approach with performing such analysis independently and shown that our approach is substantially more accurate/tight than looking individually into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\n\\n>5. Medical Application Claims It would be good to compare your estimates to some sources of ground truth. Is that feasible?\\n\\nWe have performed several experiments on two well-established medical applications, with ground-truth labels provided by medical experts. Our bounds, as mentioned before, are sound and provably correct (formally verified). Therefore, when our framework concludes that a network consistently makes a correct decision every time the other network does in the entire input region, the conclusion is guaranteed to be correct. \\n\\nWe thank the reviewer for their time. We'd appreciate knowing if our clarifications on contribution have satisfied you and if our clarified perspective might improve your evaluation of our work and your confidence in our work.\"}", "{\"comment\": \"We thank the reviewer for their response and their decision to increase the score of our paper.\"}", "{\"comment\": \"Thank you for your reply. However, my concerns on the theorem, empirical study, and limited datasets have not been fully addressed by the potential insights of LROM on robustness. I would like to keep my rating.\"}", "{\"comment\": \"Thank you for the responses!\\n\\n>In my original comment I tried to point out that higher/lower OM does not relate to agreement in predictions. ... The OM value only serves to exclude one of the disagreement scenarios.
...\\n\\nThe state of the art techniques in the verification area [1, 2] consider only correctly classified samples, to examine how networks behave in a perturbation region surrounding an actual point that is proven to be correctly classified. This is because if at least one of the networks makes an incorrect decision (negative margins), comparing the prediction of the networks is sufficient. Therefore, the only interesting scenario would be when both networks correctly classify the sample. The challenge is to compare the behaviors of the two networks in a neighborhood of a point where they both make a correct decision.\\n\\nWe emphasize our claim in this paper (Lines 62-64): \\u201cLROM enables us to formally prove that a network consistently makes a correct decision every time the other network does, and it does so in the entire input region.\\u201d \\n\\nLet us reiterate the definition of LROM(N1,N2): min{OM(N1)-OM(N2)} on an entire neighborhood.\\n\\nTo concretize the claim, suppose we could establish LROM(N1, N2) > 0 on a neighborhood around a sample \\u201cp\\u201d: \\n\\n* suppose a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified incorrectly by both networks. We still have OM(N1) > OM(N2) for \\u201cs\\u201d, which means that N1 is closer to correctly classifying \\u201cs\\u201d than N2. So you can trust N1 if you did trust N2.\\n\\n* If a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified correctly by N1, i.e., OM(N1)>0, and incorrectly by N2, i.e., OM(N2)<0, then again you can trust N1 if you did trust N2 as N1 is correct despite N2 making an incorrect prediction. \\n\\n* If a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified incorrectly by N1 and correctly by N2, then N1 is doing worse than N2. Since N1 classifies the \\u201cs\\u201d incorrectly, then OM(N1)<0. Also, N2 classifies the sample \\u201cs\\u201d correctly implies OM(N2) > 0. 
As a result, the lower bound LROM(N1,N2) on the neighborhood has to be negative, which is excluded by our assumption LROM(N1,N2) > 0.\\n\\n* If sample \\u201cs\\u201d in the neighborhood is classified correctly by both networks with LROM(N1,N2)>0, then we know that the ratio of probabilities associated to the correct decision for \\u201cs\\u201d by N1 is larger than the ratio of probabilities associated by N2 for the same correct decision on \\u201cs\\u201d. Hence, you can trust N1. \\n\\nAgain, showing LROM(N1,N2)>0 is useful because it means N1 (the network checked against a reference N2) is at least as correct as N2 (the reference network), even when margins of N1 and N2 are negative. We do not aim to show that N1 or N2 make the correct decisions on a neighborhood (i.e., have a constant label). That can be easily checked separately for N1 and N2 on the same neighborhoods using existing works. Our approach can show, on a whole neighborhood, that an implication holds. Namely, each time N2 (i.e., the reference network) makes the correct decision then N1 (the checked network) does also make the correct decision. Indeed, this excludes the case where the checked network N1 makes (in some points in the neighborhood) the wrong decision despite the reference network N2 making the right one. That is the whole point! If you trust the reference N2 enough (even in regions where it is not robust), then our framework can tell you N1 will make as good as or better decisions.\\n\\n\\n[1] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[2] VNN: Verification-friendly neural networks with hard robustness guarantees. In the International Conference on Machine Learning (ICML), 2024.\\n\\n> ... However, if the label is not constant in the domain the metrics you calculate are meaningless.\\n\\nWe do not agree the metric is meaningless when the labels are not constant. 
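As a small numeric illustration of the four-case analysis above (the margin values are toy numbers, purely illustrative, not from any real network):

```python
# Toy output margins (OM(N1), OM(N2)) at samples of one neighborhood:
# both correct, N1 correct / N2 wrong, and both wrong.
margins = [(0.30, 0.10), (0.05, -0.20), (-0.10, -0.40)]

# LROM(N1, N2) is the minimum of OM(N1) - OM(N2) over the neighborhood.
lrom = min(om1 - om2 for om1, om2 in margins)
assert lrom > 0

# LROM(N1, N2) > 0 excludes the only bad case: a sample where N2 is
# correct (OM > 0) while N1 is wrong (OM < 0), since such a sample
# would force OM(N1) - OM(N2) < 0.
assert not any(om1 < 0 < om2 for om1, om2 in margins)
```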
Let us reiterate the definition of LROM(N1,N2): min{OM(N1)-OM(N2)} on an entire neighborhood. Note that OM(N1) and OM(N2) exactly capture the actual predictions by network N1 and N2, respectively. If OM(N1)>0, the prediction is correct by N1. Similarly, If OM(N2)>0, the prediction is correct by N2. Therefore, if LROM(N1,N2)>0, this means that OM(N1)>OM(N2). Therefore, if you trust N2, then you can trust N1.\\n\\n>My comment referred to evaluating LROM for different points but for the same networks. \\u2026\\n\\nWe would like to highlight that our work is to find **relative margins between two neural networks**, given a common input point, as the title of our paper suggests. The example from the reviewer discusses **relative margins between two input points**, which is a claim we have not made.\\n\\nThank you for taking the time to review our work. We would appreciate your feedback on whether our clarifications regarding the contribution have addressed your concerns and whether they might positively influence your evaluation and confidence in our work.\"}", "{\"metareview\": \"The paper introduces a framework for comparing two neural network classifiers with identical input and output domains by quantifying the Relative Output Margin (ROM). ROM measures the consistency and correctness of one network's decisions relative to another over a specified input region. The framework provides provably correct bounds on ROM gains or losses, offering a formal verification method for assessing decision quality across input regions. The authors demonstrate the framework's applicability using datasets such as MNIST, CIFAR-10, and two real-world medical datasets.\\n\\nThe reviewers found the question studied important, the proposed method solid and sound, and the paper well-written. They were concerned about the scalability of the proposed linear programming method, though this is a notoriously hard challenge in formal verification of NNs. 
Additionally, some reviewers raised questions about the meaning and significance of the LROM measure itself. Despite several rounds of discussion, these concerns were not convincingly addressed. The authors are encouraged to carefully consider these comments to refine the methodology and improve the presentation, ensuring a more convincing and clear argument for future readers.\", \"additional_comments_on_reviewer_discussion\": \"To keep this summary short I will focus on issues that I think are the most crucial, which are about how meaningful the ROM measure is. Reviewer eQgt pointed out that \\\"Similarly, the same ROM can represent drastically different situations. In fact, if the two networks have log OM of +0 and +10, we are comparing a coin toss to an extremely accurate predictor. This is not the same as having two extremely confident networks with log OM of +1000 and +1010, but both situations have log ROM of +10.\\\" The authors' response essentially repeats the definition of OM but does not directly address this conceptual question. A few other concerns/questions about the meaningfulness of ROM raised by Reviewers eQgt and tfBm similarly did not get resolved during the discussion phase.\"}", "{\"comment\": \"At this point I am convinced the authors are deliberately avoiding addressing the questions (both mine and of other reviewers).\\n\\nI will summarize briefly.\\n\\nTo address my comment that LROM does not relate to agreement in prediction, you say that it does when you are only considering cases where both networks make the correct decision. I agree, and that is exactly the concern. It only is useful when you trust a reference AND you can bound the ROM on the correct side. \\nI understand that it is useful, but you should recognize that its applicability is limited to specific scenarios.\\n\\nYou do not address my comment that the label must be constant in the local region. You say that OM>0 means that the prediction is correct. 
This is true if the true label is constant. Even if you trust a reference, you can only trust a network with positive LROM **if** you are confident the true label is not changing in the region.\\n\\nI never claimed you are comparing margins between two points. I simply pointed out how the same result of the evaluation (same LROM between the networks) may represent drastically different scenarios. Moreover, you are comparing LROM at multiple points, and treating all equally, which provides misleading global aggregate metrics.\\n\\nThe other points I have raised have been completely ignored.\\n\\nGiven the (likely deliberate) avoidance in addressing the comments and the little time remaining, I don't see this discussion going any further.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This manuscript defines the notion of Relative Output Margin (ROM) and Local ROM (LROM) to compare network twins. Specifically, LROM > 0 means one network can consistently outperform another one in the vicinity of a given point. A theorem is provided to bound the LROM. Experiments on four datasets with a 7-layer MLP are conducted to show the effectiveness of the proposed LROM.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A novel notion of Relative Output Margin is proposed.\", \"The organization (though not the writing) is very good.\", \"Experiments on multiple datasets are presented to show the interesting properties of ROM.\"], \"weaknesses\": [\"## Major\", \"The conclusion is unclear.\", \"In my opinion, the main theorem (Theorem 3.1) solely establishes that there indeed exist upper and lower bounds for any LROM. If so, the significance of this theorem is limited and LROM cannot be linked to the generalizability of DNNs.\", \"There is no clear conclusion for the experiments. 
What do the experiments show?\", \"In the experiments, only a small 7-layer MLP is used, which hardly gives a good prediction for CIFAR-10, which makes a limited contribution. Can the authors provide the results at least on CNNs like VGG or ResNet?\", \"In the experiments, \\\"We exclusively focus on **correctly** classified samples\\\" (Line 299). For me, a big application for ROM is to measure the uncertainty of predictions. If we already know that the prediction is correct, there is no point in using this technique.\", \"## Minor\", \"The writing can be improved.\", \"In the Introduction, more words are needed to briefly introduce the theoretical and empirical work (now only 6 lines, from Line 61-66), which would make this paper clearer.\", \"The notation is too complicated and can be simplified.\", \"It is better to number equations.\", \"Use $\\\\max$ and $\\\\min$ to replace $max$ and $min$ in the equations.\", \"Line 215: \\\"$-\\\\mathcal{R}\\\\geq \\\\mathcal{M}$\\\" -> \\\"$-\\\\mathcal{R} \\\\geq -\\\\mathcal{M}$\\\"\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
I understand that it is useful, but you should recognize that its applicability is limited to specific scenarios.\\n\\nThis is inaccurate. The canonical definition of local robustness requires networks to maintain constant labels locally. Networks need not maintain constant predictions in general. If you show LROM(N1,N2)>0, then you know N1 will make the correct prediction each time N2 does. In our previous response, we analyze all four cases (for N1/N2 making correct/incorrect decisions) and show LROM is meaningful, despite the signs of OMs. We refer the reviewer to our previous answer.\\n\\n>You do not address my comment that the label must be constant in the local region. You say that OM>0 means that the prediction is correct. This is true if the true label is constant. Even if you trust a reference, you can only trust a network with positive LROM if you are confident the true label is not changing in the region.\\n\\nIndeed, there has been a misunderstanding about predicted labels and actual/ground-truth labels. **Local robustness** requires constant labels on considered balls (typically centered on correctly labeled samples). As the reviewer certainly agrees, we did not invent it. **LROM (Local Relative Output Margins) is also local**. However, in case two networks do not have constant predictions throughout a ball, unlike current techniques, we do not give up the comparison. If you show LROM(N1,N2) > 0 (observe we can choose other thresholds for flexibility), then you know N1 will make the correct prediction (matching the should-be-constant label) each time N2 does. If you use N2 as a reference (despite its occasional violations of local robustness), then you know N1 maintains the correct label at least each time N2 does. \\n\\n>I never claimed you are comparing margins between two points. I simply pointed out how the same result of the evaluation (same LROM between the networks) may represent drastically different scenarios.
Moreover, you are comparing LROM at multiple points, and treating all equally, which provides misleading global aggregate metrics.\\n\\nThat is what we understood from the reviewer's previous comment: _\\u201cMy comment referred to evaluating **LROM for different points but for the same networks**. \\u2026\\u201d_\\n\\nLROM(N1,N2) is each time computed for a single ball for both N1 and N2 (just like local robustness is checked for a network, a single ball at a time).\\n\\nWe do not average the margin differences over the samples; rather, we count the number of samples for which LROM(N1,N2)>0 vs LROM(N2,N1)>0. Therefore, the claim is not well founded. \\n\\n>The other points I have raised have been completely ignored.\\n\\nWe now discuss the delta aspect, which we had not addressed in our previous response for discussion coherence and for space limitation. \\n\\n>I believe the Figure referenced does not show what I asked. It shows how the joint analysis gives tighter bounds than the independent one. In the limit for $\\\\delta=0$, the approach reduces to simply looking at the network predictions at a single point.\\nThe claims of robustness should be evaluated by comparing the proposed approach to this much simpler and naive method.\\n\\n>The results mentioned are only reported in the main text. The comparison to the limit for $\\\\delta=0$ should be included in all results as a baseline. This ties to my other comments on the value of $\\\\delta$. The proposed method has merits only if you are able to verify a large enough proportion of points for a large enough local radius, but not so large that the label cannot be assumed constant in the local region. This cannot be treated as a hyperparameter of secondary importance.\\n\\nFirst, we only have the results in the text because other forms would not add much.\\n\\nWe maintain that pointwise comparison is different from pointwise guarantees on neighborhoods: it only concerns specific samples.
Borrowing from Dijkstra's statement that testing may only exhibit bugs, never show their absence, we highlight that working with concrete points can only hope to find counter-examples to \\\"checked N1 makes the right decision each time reference N2 does\\\". You cannot show their absence.\\n\\n>Given the (likely deliberate) avoidance in addressing the comments and the little time remaining, I don't see this discussion going any further.\\n\\nWe have initially focused the discussion on LROM for brevity and for discussion coherence. We have now briefly addressed the remaining parts.\"}", "{\"comment\": \"We disagree with the statement that our approach equates \\u201ca coin toss\\u201d and \\u201can extremely accurate predictor\\u201d because 1010-1000=10-0 in terms of logits. Both correspond to extremely accurate predictors if you consider the softmax layer (as is common in softmax-based classifiers). Indeed, output margins are defined on the output of the softmax layer, NOT on the logits before the softmax layer. Although it may seem that the output margins of a pair of networks with logits 0 and 10 should differ from those of a pair with logits 1000 and 1010, the softmax outputs, and hence the output margins, are exactly the same for both (approximately 0.005% and 99.995%). Therefore, when comparing the relative output margin after the softmax layer, it does not matter whether the logits are 0-10, 1000-1010 or 1000000-1000010.\\n \\nThat being said, we could as well focus on the logits instead of the softmax outputs. \\n \\nThe fundamental question we try to answer is however \\u201cGiven two compatible networks and an input region, is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input region?\\u201d \\n \\nWe thank the reviewers and chairs for their efforts.
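For completeness, the softmax shift-invariance underlying the logits argument above can be checked with a minimal standalone sketch (not code from the paper):

```python
import math


def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]


p_small = softmax([0.0, 10.0])
p_large = softmax([1000.0, 1010.0])

# Softmax depends only on logit differences, so both pairs give the
# same output margins: roughly 0.005% vs 99.995%.
assert all(abs(a - b) < 1e-12 for a, b in zip(p_small, p_large))
```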
We will strive to better clarify the contributions in the future.\"}", "{\"comment\": \"Thank you for your response!\\n\\nWe would like to highlight that the reviewer has not responded to part of our rebuttal and their previous claim that LROM is a \\u201cvery ad-hoc\\u201d measure. We consider that our previous answer was satisfactory.\\n\\n> You are wasting a full page on it without clear advantage for the reader.\\n\\nYet the point is still misunderstood, as witnessed by the comments about the ad-hoc nature of LROM. The \\u201ctrivial consequence of manipulating the log of probability ratios [quoted from the reviewer]\\u201d is included only for completeness, in the supplementary material, which has no page limit. \\n\\n>I think you are really onto something with this last remark. It seems that this is a concern shared by other reviewers. Overall, it seems that we are likely unconvinced by LROM alone. If you argue that it \\\"can be easily checked separately for N1 and N2 on the same neighborhoods using existing works\\\" then I believe it is your responsibility to use these \\\"existing works\\\" and combine the results with LROM to really check that this additional information brings more insights than the raw certifiable robustness of N2 in the neighborhood of each train/test example.\\n\\n>If you perform these experiments in a further re-submission it could make a compelling case for your work.\\n\\nWhat can be easily checked with existing work is to independently ask whether a network is locally robust. 
Current approaches are too coarse to check the implication (N1 makes a correct prediction each time N2 does), let alone to give tight bounds on the quotient of ratios with which predictions are made.\\n\\nWe have already done experiments to compare against performing such analysis independently and have shown that our approach is substantially more accurate/tight than looking individually into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\nIn addition, in our experiments (Section 4.3, \\u201cAdversarially-Trained Models\\u201d), we consider the adversarially-trained networks with PGD and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d. As such, our experiments show that LROM provides more insights into the difference between two networks than \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d.\\n\\nDespite the explanations in the paper and in our answers, we can only notice misunderstandings about what LROM adds compared to independently checking local robustness. We can compare networks on balls even when they are not locally robust there. We can even use thresholds to bound the ratios of their decisions w.r.t. a robust prediction. No previous work does that.\\n\\nWe thank the reviewer for the comments. LROM gives insights that are not possible to get with existing work. We shall think of ways to better clarify this.\"}", "{\"comment\": \"We acknowledge the reviewer\\u2019s efforts in assessing our work and are grateful for their feedback. We have addressed the comments and inquiries below.\\n\\n> The conclusion is unclear. In my opinion, the main theorem (Theorem 3.1) solely establishes that there indeed exist upper and lower bounds for any LROM. 
If so, the significance of this theorem is limited and LROM cannot be linked to the generalizability of DNNs.\\n\\nOur approach addresses a very fundamental and interesting gap (Lines 62-64): \\u201cLROM enables us to formally prove that a network consistently makes a correct decision every time the other network does, and it does so in the entire input region.\\u201d This property holds *on an entire input region*, hence the connection to generalizability. \\n\\nOn the other hand, in our experiments (Section 4.3), we consider the adversarially-trained networks and non-defended networks and show that our framework captures that adversarially-trained models have larger LROM than the non-defended one. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d (i.e., the proportion of test samples for which a model is guaranteed to remain robust, i.e., correctly classify, within a specified perturbation radius). Therefore, LROM provides more insight into generalizability and robustness.\\n\\n\\n>There is no clear conclusion for experiments. What do the experiments show?\\n\\n(1) Our experiments investigate the LROM between different models and provide examples of how our approach can be used to compare two networks (original vs distilled/quantized/pruned/VNN). This is shown in Section 4.3 and Figures 1 and 3. However, they are not intended to draw conclusions about any specific model/network. \\n\\n(2) Our experiments show that LROM provides more insight into generalizability and robustness than \\u201caccuracy\\u201d and \\u201ccertified accuracy\\u201d (\\u201cAdversarially-Trained Models\\u201d in Section 4.3).
\\n\\n(3) Our experiments show that our approach is substantially more informative/accurate than looking individually into robustness properties of each network (\\u201cComparison with Independent Analysis\\u201d and Figure 2 in Section 4.3).\\n\\n\\n>In the experiments, only a small 7-layer MLP is used, which hardly give a good predition for CIFAR-10, which makes a limited contribution. Can the authors provides the results at least on CNNs like VGG or ResNet?\\n\\n(1) We have based our experiments for the MNIST and CIFAR-10 datasets on neural networks sourced from the state-of-the-art studies in verification of neural networks [1, 2] in Sections 4.3.1 and 4.3.2. \\n\\n(2) The neural networks for the CHB-MIT and MIT-BIH datasets in Sections 4.3.3 and 4.3.4 are CNNs, adopted from a recent ICML paper [3]. In addition, we have already provided further experiments with CNNs on MNIST and CIFAR-10 datasets in the appendix (Appendix A.4.1, Figures 4 and 5 in the same appendix). \\n\\n\\n[1] Proof transfer for fast certification of multiple approximate neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2022.\\n\\n[2] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[3] VNN: Verification-friendly neural networks with hard robustness guarantees, International Conference on Machine Learning (ICML), 2024.\\n\\n\\n>In the experiments, \\\"We exclusively focus on correctly classified samples\\\" (Line 299). For me, a big application for ROM is to measure the uncertainty of predictions. If we have already know that the predictions is correct, there is no point to use this technique.\\n\\nEven when we know the predictions for the given sample are correct, the reasoning in the neighborhood of the sample is still relevant. This is the fundamental setting in \\u201cadversarial examples\\u201d [1]. 
An input sample may be correctly classified, but small perturbations can lead to misclassification of the input sample.\\n\\nIt is common in the verification area to consider only correctly classified samples, to examine how networks behave in a perturbation region surrounding an actual point that is proven to be correctly classified [2]. However, this constraint can easily be removed from the analysis.\\n\\n[1] Adversarial examples in the physical world, Artificial intelligence safety and security, 2018.\\n\\n[2] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n>Line 215: \\\"$-\\\\mathcal{R} \\\\geq \\\\mathcal{M}$\\\" -> \\\"$-\\\\mathcal{R} \\\\geq -\\\\mathcal{M}$\\\"\\n\\n\\nAfter revisiting the derivation, we are confident that the original equation presented in our paper is correct (please see the proof of Theorem 3.1 in Appendix A.1). We welcome any further discussion or clarification to ensure mutual understanding and would be happy to provide additional details or insights if needed.\\n\\n\\nWe appreciate your thoughts and hope that our response has clarified our perspective. We also hope that this clarification may lead to an improved score from the reviewer.\"}", "{\"summary\": \"This paper presents a framework for comparing two neural network classifiers with shared input and output domains. The goal is to analyze these \\\"neural network twins\\\" through the concept of Relative Output Margin (ROM), a metric indicating the confidence with which one network outperforms the other within a defined input region. Specifically, the framework formalizes and verifies \\\"Local Relative Output Margins\\\" (LROMs), allowing for the computation of provable bounds that indicate which network consistently makes correct decisions across input variations. 
This is crucial in applications where compact, optimized versions of networks are used, such as in medical device deployment, where safety-critical tasks like seizure and arrhythmia detection require guaranteed performance reliability.\\n\\nThe experiments in the paper evaluate the proposed Relative Output Margin (ROM) and Local Relative Output Margin (LROM) framework by testing it on four datasets: MNIST, CIFAR-10, CHB-MIT (EEG data for epilepsy detection), and MIT-BIH (ECG data for arrhythmia detection). The experiments focus on verifying LROM across different pairs of neural networks: original, pruned, quantized, and distilled versions. Across datasets, the experiments demonstrate that the LROM framework effectively captures the comparative performance and robustness of different network types under a defined perturbation range.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper\\u2019s primary contribution lies in defining and formalizing ROM and LROM. This enables a provable, quantitative comparison between neural networks for applications requiring high reliability and safety.\", \"The study evaluates the proposed framework across multiple datasets, including standard and specialized medical data, supporting the generalizability and robustness of LROM as a comparative measure.\"], \"weaknesses\": [\"The LROM optimization framework requires handling complex linear programming tasks, which may limit scalability for larger networks. It would be better to test the framework further on larger neural networks, such language models.\", \"It may be challenging to interpret the evaluated measures due to the technical intricacies involved in LROM computation.\"], \"questions\": [\"It would be better to conduct a comparison with existing adversarial robustness metrics and methods. 
How do the proposed measures differ from the existing adversarial robustness measures?\", \"What would be the applications of the framework beyond network twins (similar architectures with compact versions), such as varied neural architectures?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We respectfully disagree, we answered within the limited space. The reviewer has also not responded to our clarifications.\\n\\nWe are confident with our answers and are happy the discussion will be public for the community to judge.\"}", "{\"comment\": \"Thank you for your clarifications.\\n\\n> \\u201cGiven two compatible networks and an input region, is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input region.\\u201d\\n\\nThe entire input regions is still characterized by a set of balls centered around the train set. Do you believe this brings better guarantees than just monitoring the certified robustness (using labels) over the train set?\\n\\n> The manipulation referred to by the reviewer allows us to perform this analysis without any approximation (in the objective function) and in a formally sound fashion\\n\\nSorry, but I believe this manipulation is rather straightforward.\\n\\n> We use output margins, i.e. margin in the output domain, not to be confused with input margins or margin (e.g., as in SVMs). [...] Optimization of margins is outside the scope of this work.\\n\\nI agree. I repeat what I said: \\\"*All the proofs of appendix A.1 are a trivial consequence of manipulating the log of probability ratios, to end-up with a simple difference of logits. This is colloquially called the margin*\\\". 
This is the standard practice in robustness certification, or in training with the Hinge loss.\\n\\nI don't understand why the LROM is important in the first place, this measure looks very *ad-hoc*. No formal justification is given on *why* that should be the correct measure to compare \\\"compatible networks\\\". You said \\\"*is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input region*\\\"; I agree this is an interesting question, but there is no discussion on why LROM is the correct measure to answer this question.\\n\\nIf the goal behind introducing LROM was just to obtain a difference of logits in the optimization problem of line 163, you just re-discovered the rational behind multi-class hinge loss. As I said, most papers in robustness certification use the margin, no need to introduce LROM as a contribution. You could simply start the paper by stating the optimization problem of line 163; that would be fine, skipping lines 99 to 160 entirely, that would look even more logical to me than introducing LROM. \\n \\n> We would appreciate it if the reviewer could elaborate on the question. we should highlight that very negative LROM does not necessarily imply that probabilities are near zero\\n\\nLROM is essentially the logarithm of a ratio of probabilities. If the log is negative, then the ratio is near zero, which means that the numerator is negligible compared to the denominator. I am questioning the rational behind LROM as a measure (as discussed before) on this corner case.\\n\\n> Let us highlight the motivation and use-case we discuss for practitioners in Lines 29-36 (first paragraph of Introduction): [...] \\n\\nI am not questioning the motivation of your work, as I recognize its importance in both in the \\\"*summary*\\\" and the \\\"*strengths*\\\" section of my review. However I am questioning your solution. 
See below:\\n\\n> Our approach answers this very fundamental question: \\u201cGiven these two compatible networks and an input region, is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input region.\\u201d\\n\\nSorry, but I am simply not convinced that this ad-hoc measure, that you obtain with a straightforward extension of existing LP relaxations, actually answer the question in a satisfactory manner. \\n\\nI do not see much a progress in the discussion, therefore I'd like to keep my score.\"}", "{\"comment\": \">Thank you for your reply. However, my concerns on theorem, empirical study, and limited datasets have not been fully addressed by potential insights of LROM on robustness. I would like to keep my rating.\\n\\nWe thank the reviewer for their response. We would like to reiterate that, similar to the state of the art studies in the domain, we have used four datasets, MNIST, CIFAR-10, CHB-MIT, and MIT-BIH, out of which two are for real-world applications. We have considered numerous baselines for quantization, pruning, distillation, and VNNs, from top AI/ML conferences [1, 2]. Therefore, given the limited space, we believe the paper is sufficiently well supported,by a large number of experiments, including those presented in the supplementary material.\\n\\n[1] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[2] VNN: Verification-friendly neural networks with hard robustness guarantees. In the International Conference on Machine Learning (ICML), 2024.\\n\\nMoreover, we would like to point out certain claims made by the reviewer that are inaccurate:\\n\\n> In the experiments, \\\"We exclusively focus on correctly classified samples\\\" (Line 299). For me, a big application for ROM is to measure the uncertainty of predictions. 
If we have already know that the predictions is correct, there is no point to use this technique.\\n\\n\\nAs we discussed and previously highlighted, this is how the evaluation is conducted in the state-of-the-art works [1, 2].\\n\\nOn the other hand, the reviewer seems to not be accustomed to the adversarial examples and robust certification domains and does not differentiate between pure prediction performance and robustness.\\n\\nAs we previously mentioned, \\u201cAn input sample may be correctly classified, but small perturbations can lead to misclassification of the input sample.\\u201d\\n\\n[1] An abstract domain for certifying neural networks, Proceedings of the ACM on Programming Languages (PACMPL), 2019.\\n\\n[2] VNN: Verification-friendly neural networks with hard robustness guarantees. In the International Conference on Machine Learning (ICML), 2024.\\n\\n> Line 215: \\\"$-\\\\mathcal{R} \\\\geq \\\\mathcal{M}$\\\" -> \\\"$-\\\\mathcal{R} \\\\geq -\\\\mathcal{M}$\\\"\\n\\nPlease note the difference between $\\\\mathcal{R^{N1\\\\mid N2}}$ and $\\\\mathcal{R^{N2\\\\mid N1}}$. The theorem explains that $\\\\mathcal{R^{N1\\\\mid N2}} \\\\leq \\\\mathcal{M^{N1\\\\mid N2}}$ and $-\\\\mathcal{R^{N2\\\\mid N1}} \\\\geq \\\\mathcal{M^{N1\\\\mid N2}}$.\\n\\nThe reviewer has initially claimed that the theorem is not correct (which cannot be taken lightly) and we appreciate it if the reviewer could help us identify the issue; or adjust their score otherwise.\\n\\nWe thank the reviewer for their time. We'd appreciate knowing if our clarifications on contribution have satisfied you and if our clarified perspective might improve your evaluation of our work and your confidence in our work.\"}", "{\"comment\": \"Thank you for the prompt response!\\n\\n>The entire input regions is still characterized by a set of balls centered around the train set. 
Do you believe this brings better guarantees than just monitoring the certified robustness (using labels) over the train set?\\n\\nIndeed. The balls are centered around a test set, not the train set. This is well established when verifying robustness with formal guarantees on entire \\u201cballs\\u201d. This is not possible with testing or adversarial examples/training.\\n\\nAt the same time, in our experiments (Section 4.3), we consider two adversarially-trained networks and a non-defended network and show that our framework captures that adversarially-trained models have larger LROM than non-defended ones. This is while both networks have similar \\u201caccuracy\\u201d and \\u201ccertified accuracy\\\". \\n\\n>Sorry, but I believe this manipulation is rather straightforward.\\n\\nThe simplicity of the manipulation does not make it useless, nor does it warrant rejection. Formal verification of local robustness, a well-established area, often only looks at one single network (see for example [1] with 1000+ citations, or more recent work at ICML 2024 [2]). These studies are even simpler than our manipulation (because we consider and compare two networks), yet, it is commonly accepted because it shows robustness on balls centered on a test set. Here, the comparison is simple, but not as straightforward: we compare the ratios of probabilities of the correct/incorrect decisions in two networks. This allows us to determine whether a network is at least as robust as another. We are not aware of any work that considers this aspect.\\n\\n[1] AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, S&P, 2018.\\n\\n[2] VNN: Verification-friendly neural networks with hard robustness guarantees. ICML, 2024.\\n\\n\\n>I don't understand why the LROM is important in the first place, this measure looks very ad-hoc.\\n\\nLROM is by no means ad-hoc. Let us reiterate the definition of LROM(N1,N2): min{OM(N1)-OM(N2)} on an entire neighborhood. 
In short, if LROM(N1,N2)>0, this means that OM(N1)>OM(N2). Therefore, if you trust N2, you can trust N1. Note that OM(N1) and OM(N2) exactly capture the actual predictions by network N1 and N2, respectively.\\n\\nTo make our claim concrete, suppose we could establish LROM(N1, N2) > 0 on a neighborhood around a sample \\u201cp\\u201d: \\n\\n* suppose a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified incorrectly by both networks. We still have OM(N1) > OM(N2) for \\u201cs\\u201d, which means that N1 is closer to correctly classifying \\u201cs\\u201d than N2. So you can trust N1 if you did trust N2.\\n\\n* If a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified correctly by N1, i.e., OM(N1)>0, and incorrectly by N2, i.e., OM(N2)<0, then again you can trust N1 if you did trust N2 as N1 is correct despite N2 making an incorrect prediction. \\n\\n* If a sample \\u201cs\\u201d in the neighborhood of \\u201cp\\u201d is classified incorrectly by N1 and correctly by N2, then N1 is doing worse than N2. Since N1 classifies the \\u201cs\\u201d incorrectly, then OM(N1)<0. Also, N2 classifies the sample \\u201cs\\u201d correctly implies OM(N2) > 0. As a result, the lower bound LROM(N1,N2) on the neighborhood has to be negative, which is excluded by our assumption LROM(N1,N2) > 0.\\n\\n* If sample \\u201cs\\u201d in the neighborhood is classified correctly by both networks with LROM(N1,N2)>0, then we know that the ratio of probabilities associated to the correct decision for \\u201cs\\u201d by N1 is larger than the ratio of probabilities associated by N2 for the same correct decision on \\u201cs\\u201d. Hence, you can trust N1. \\n\\nWe do not aim to show that N1 or N2 make the correct decisions on a neighborhood (i.e., have a constant label). That can be easily checked separately for N1 and N2 on the same neighborhoods using existing works. 
Our approach can show, on a whole neighborhood, that each time N2 (i.e., the reference network) makes the correct decision then N1 (the checked network) does also make the correct decision. \\n\\n>Sorry, but I am simply not convinced that this ad-hoc measure, that you obtain with a straightforward extension of existing LP relaxations, actually answer the question in a satisfactory manner.\\n\\nLROM is by no means ad-hoc, despite its simplicity. Let us reiterate the definition of LROM(N1,N2): min{OM(N1)-OM(N2)} on an entire neighborhood. Note that OM(N1) and OM(N2) exactly capture the actual predictions by network N1 and N2, respectively. If OM(N1)>0, the prediction is correct by N1. Similarly, If OM(N2)>0, the prediction is correct by N2. Therefore, if LROM(N1,N2)>0, this means that OM(N1)>OM(N2), on an entire neighborhood. Therefore, if you trust N2 (i.e., OM(N2)>0) and we know LROM(N1,N2)>0, then you can trust N1 (i.e., OM(N1)>OM(N2)>0). We are happy to include this in our paper.\\n\\nWe appreciate the reviewer\\u2019s time and would appreciate knowing if our clarifications on the contribution have addressed your concerns and influenced your evaluation of our work.\"}", "{\"comment\": \"Thank you for your additional clarifications\\n\\n> The simplicity of the manipulation does not make it useless, nor does it warrant rejection. \\n \\nIndeed. But it does not warrant acceptance either. You are wasting a full page on it without clear advantage for the reader.\\n\\n> These studies are even simpler than our manipulation (because we consider and compare two networks), yet, it is commonly accepted\\n\\nI don't believe acceptance and rejection in science should follow the same logic as \\\"case-law\\\" and \\\"legal precedence\\\". Especially when we factor in the novelty. Paper [1] is from 2018 and the novelty for that time makes it a stronger case than a submission in 2024. 
\\n\\n> We do not aim to show that N1 or N2 make the correct decisions on a neighborhood (i.e., have a constant label). **That can be easily checked separately for N1 and N2 on the same neighborhoods using existing works**. Our approach can show, on a whole neighborhood, that each time N2 (i.e., the reference network) makes the correct decision then N1 (the checked network) does also make the correct decision. \\n\\n(bold emphasis by me). \\n \\nI think you are really onto something with this last remark. It seems that this is a concern shared by other reviewers. Overall, it seems that we are likely unconvinced by LROM alone. If you argue that it \\\"*can be easily checked separately for N1 and N2 on the same neighborhoods using existing works*\\\" then I believe this is your responsibility to use these \\\"existing works\\\" and combine the results with LROM to *really check* that this additional information brings more insights than the raw certifiable robustness of N2 in the neighborhood of each train/test example.\\n\\nIf you perform these experiments in a further re-submission it could make a compelling case for your work.\"}", "{\"comment\": \"We thank the reviewer for their time and effort in reviewing our work, and we appreciate their valuable feedback.\", \"weaknesses\": \">This paper utilizes these tools, and the only novelty is to consider a joint optimization of the difference of logits for two networks, instead of a single one, which I consider a straightforward extension departing from existing methods.\\nTherefore the main contribution of the paper is introducing the OM, ROM and LROM measures, and using the whole Section 2 to deal with rather trivial considerations.\", \"our_approach_answers_a_very_fundamental_and_interesting_question\": \"\\u201cGiven two compatible networks and an input region, is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input 
region.\\u201d This has not been done before.\\n\\n>All the proofs of appendix A.1 are a trivial consequence of manipulating the log of probability ratios, to end-up with a simple difference of logits. This is colloquially called the margin, not to be confused with the Output Margin (OM) measure that authors introduce, without clear motivation.\\n\\nWe would like to highlight that our main novelty and contribution is the *sound* analysis of the LROM for two neural networks. The manipulation referred to by the reviewer allows us to perform this analysis without any approximation (in the objective function) and in a formally sound fashion, which can be formulated as linear programming.\\n\\nWe use *output* margins, i.e. margin in the output domain, not to be confused with input margins or margin (e.g., as in SVMs). \\n\\n>In most papers for NN certification, this margin is analyzed, reported or even optimized (using Hinge loss) in a straightforward manner, without bothering highlighting the link with output probabilities. In this regard, the theoretical contribution appears rather shallow IMHO.\\n\\nOur paper aims at analysis of relative margins. Optimization of margins is outside the scope of this work.\", \"questions\": \">LROM is assymmetric, i.e inverting DNN N1 and N2 yields a different bound. Can you comment on this property? Is this something desirable? What if LROM(N1, N2) equals some value, and LROM(N2, N1) equals another, is there something to interpret here (qualitatively)? \\n\\nWe answer this question in Theorem 3.1, also mentioned in Lines 214-215 of the main paper. Essentially, because of the relaxation/approximations in formal methods, the *exact* value of LROM(N1, N2) is bounded from below by LROM(N1,N2), which is found by our approach, and bounded from above by -LROM(N2, N1). Therefore, not only can we provide a safe lower-bound on the exact LROM(N1, N2), but also a safe upper-bound.\\n\\n>When LROM is very negative (i.e. 
logits difference is huge) which implies that probabilities are near-zero, not too far away from machine precision zero, do you expect this value to carry meaningful information? \\n\\nWe would appreciate it if the reviewer could elaborate on the question. However, we should highlight that very negative LROM does not necessarily imply that probabilities are near zero. \\n\\n>Can you clarify use-cases in which LROM is useful for practioner?\\n\\nLet us highlight the motivation and use-case we discuss for practitioners in Lines 29-36 (first paragraph of Introduction): \\u201cIn the medical domain, for instance, neural networks can enable implantable and wearable devices to detect cardiac arrhythmia (Sopic et al., 2018a) or epileptic seizures (Baghersalimi et al., 2024) in real time. However, due to their limited computing resources, such devices often adopt the compact networks corresponding to the original medical-grade networks. It is vital for the compact network to reliably detect cardiac abnormalities/seizures, as lack of reliable decisions can jeopardize patients\\u2019 lives. Therefore, reasoning about the decisions made by the compact network w.r.t. to an original/reference network is vital for the safe deployment of the compact networks.\\u201d\", \"our_approach_answers_this_very_fundamental_question\": \"\\u201cGiven these two compatible networks and an input region, is it possible to formally prove that one network consistently makes a correct decision every time the other network does, in the entire input region.\\u201d\\n\\nWe thank the reviewer for their time. We'd appreciate knowing if our clarifications on contribution have satisfied you and if our clarified perspective might improve your evaluation of our work and your confidence in our work.\"}", "{\"comment\": \"We thank the reviewer again for their effort in reviewing our paper and their time. 
Given that we are approaching the end of the discussion period, we'd appreciate knowing if our responses have satisfied you and if our clarified perspective might improve your evaluation. Any further questions, comments, or suggestions for enhancing the paper are most welcome.\"}" ] }
0NAVeUm7sk
Variational Bayesian Pseudo-Coreset
[ "Hyungi Lee", "Seungyoo Lee", "Juho Lee" ]
The success of deep learning requires large datasets and extensive training, which can create significant computational challenges. To address these challenges, pseudo-coresets, small learnable datasets that mimic the entire data, have been proposed. Bayesian Neural Networks, which offer predictive uncertainty and probabilistic interpretation for deep neural networks, also face issues with large-scale datasets due to their high-dimensional parameter space. Prior works on Bayesian Pseudo-Coresets (BPC) attempt to reduce the computational load for computing weight posterior distribution by a small number of pseudo-coresets but suffer from memory inefficiency during BPC training and sub-optimal results. To overcome these limitations, we propose Variational Bayesian Pseudo-Coreset (VBPC), a novel approach that utilizes variational inference to efficiently approximate the posterior distribution, reducing memory usage and computational costs while improving performance across benchmark datasets.
[ "Bayesian Pseudo-Coreset", "Variational Inference" ]
Accept (Poster)
https://openreview.net/pdf?id=0NAVeUm7sk
https://openreview.net/forum?id=0NAVeUm7sk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vdtpJdsUST", "ujwOjfXwPZ", "uDXr7v4LEH", "pWJg5PJP1F", "p4WQ4Ougig", "lk1yTNKd7E", "lLEHWrPZ7U", "jDHyLt9e4q", "YmvelxK7CN", "WYyZL3qdOU", "QbjC1DA99v", "OGOULt3fAC", "Mx4YcuZ4ov", "MCZ8iwoLqG", "M5PXeia2AP", "LXy0cDndsp", "L8nLEBXLLf", "L1XCLsLzDo", "KCoWd4g9av", "GfBcrBwvkN", "F5kNggPuee", "F0CL7Zl9No", "E7xnW9VPbX", "CSMuGMIGoV", "CIB5GTyCvL", "Bx13eTavzl", "BabmzoVfId", "8AGLSp7BEh", "7aUY4cqg1E", "65P6VCCzrd", "64YcpP8xkZ", "2PlPsUDp2W", "0T6qJeokIt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731909812556, 1732238933749, 1731909163867, 1732238915396, 1731909333324, 1732289753185, 1731909876054, 1732238924848, 1731909194337, 1731909050342, 1732289338258, 1731909504358, 1732238907377, 1737523802031, 1730537073641, 1731909377831, 1730391654203, 1731909586319, 1731909696167, 1732290692887, 1732264268999, 1732258301340, 1731907308929, 1734823705985, 1730537248491, 1732765440858, 1729436049135, 1732416015496, 1731909082487, 1731909248434, 1731907487595, 1731909747636, 1732290660446 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6920/Reviewer_8mt5" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_hhFq" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_8mt5" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_sqcH" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_sqcH" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Area_Chair_2R4d" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_hhFq" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Reviewer_oE39" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ], [ "ICLR.cc/2025/Conference/Submission6920/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q6] Generalization to Different Tasks like regression task**\\n\\nThank you for the constructive comment regarding extending our method to other tasks such as regression tasks. 
Indeed, our VBPC framework is versatile, as it can be adapted to various tasks by adjusting the dataset VI\\u2019s likelihood function.\\n\\nIn regression tasks, different likelihoods can be used; however, Gaussian likelihoods are especially prevalent, as they can enhance compatibility between pseudo-coreset VI and dataset VI, facilitating more effective learning of the pseudo-coreset. Even when using other likelihood functions, Gaussian likelihood can still be applied as an approximation during training. Additionally, if the likelihood has a closed-form solution, it would be feasible to align both the pseudo-coreset VI and dataset VI likelihoods to improve coherence and potentially achieve better performance.\\n\\nThis flexibility proves the potential of our method to generalize to a broader range of tasks, including regression. We will incorporate this discussion into the final manuscript to clarify how our approach can be extended to handle different types of tasks through appropriate likelihood selection.\\n\\n**[Q7] Robustness to Out-of-Distribution Data**\\n\\n Following your constructive suggestion, we have conducted additional Out-of-Distribution (OOD) detection experiments and reported the results. The metrics we evaluate include AUROC, AUPR-In, and AUPR-Out, where higher values indicate better performance. We used models trained with the CIFAR10 IPC 10 setting and evaluated them on CIFAR100, TinyImageNet, and SVHN datasets as OOD datasets.\\n\\nThe results, presented in Table R.11, demonstrate that the pseudo-coreset learned by VBPC performs robustly in OOD detection scenarios. These findings, combined with the corruption experiments in the main paper, validate the effectiveness and robustness of VBPC under diverse and challenging evaluation conditions.\\n\\n__Table R.11__ AUROC, AUPR-In, and AUPR-Out results for the OOD detection task with a model trained with the learned pseudo-coresets. 
Note that we used the same model architecture that is used when training the pseudo-coresets.\n\n|Dataset |Model |AUROC|AUPR-In|AUPR-Out|\n|--------------|--------------|----------|------------|---------------|\n|CIFAR100|BPC-CD |49.84 | 49.74 | 50.13 |\n| |BPC-fKL |51.21 | 49.61 | 51.72 |\n| |BPC-rKL | 48.53 | 48.64 | 48.63 |\n| |FBPC | 49.69 | 49.28 | 49.70 |\n| |VBPC | **54.61** | **54.59** | **54.25** |\n|TinyImageNet|BPC-CD |49.09 | 52.79 | 45.88 |\n| |BPC-fKL |48.95 | 51.72 | 47.00 |\n| |BPC-rKL | 48.34 | 52.71 | 44.49 |\n| |FBPC | 45.39 | 49.70 | 43.14 |\n| |VBPC | **52.85** | **56.22** | **49.64** |\n|SVHN |BPC-CD |55.09 | 35.64 | 73.88 |\n| |BPC-fKL |54.26 | 34.78 | 75.47 |\n| |BPC-rKL | 42.61 | 28.29 | 67.15 |\n| |FBPC | 41.34 | 30.12 | 62.18 |\n| |VBPC | **68.50** | **48.49** | **82.91** |\"}", "{\"comment\": \"Thank you for your dedication and interest in our paper. As the author and reviewer discussion period approaches its end, we are curious to know your thoughts on our rebuttal and whether you have any additional questions.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q1] Comparison to existing state-of-the-art classifiers**\n\nThank you for the considerable effort and assistance you've put into reviewing our paper. However, it seems there may be a misunderstanding regarding this particular weakness you pointed out. The goal of our method is not to develop a state-of-the-art model for a specific dataset, nor is it to create an inference method that achieves higher performance by leveraging existing state-of-the-art models. 
Rather, our research focuses on effectively summarizing a large volume of training data into a minimal yet well-representative set of data points, thereby reducing the computational and memory burdens needed for learning.\\n\\nFor example, in the case of CIFAR10, we demonstrated that our method could achieve a strong accuracy of 55% using only 10 images (which is just 0.2% of the training data) instead of the original 60,000 images. Furthermore, the model and dataset setups we use are consistent with the benchmark configurations employed in various dataset distillation and Bayesian pseudo-coreset studies, such as [1,2,3,4,5,6]. These studies, for the sake of fair comparison, fix model architecture to certain layer sizes and kernel sizes, which, of course, results in models that may not match the performance of SOTA models. However, our VBPC method could indeed be practically utilized in conjunction with SOTA models to learn a pseudo-coreset.\\n\\nNotably, our method requires only the last layer for variational inference, making it significantly easier to apply to large models such as ViTs compared to existing Bayesian pseudo-coreset methods. A major drawback of previous BPC methods is that they require a pre-trained target model (e.g., ViT) along with a large number of expert trajectories obtained by training the ViT model multiple times with different random seeds. In contrast, our method does not require pre-training multiple ViT models, making it a much more efficient approach to pseudo-coreset learning.\"}", "{\"comment\": \"Thank you for your dedication and interest in our paper. As the author and reviewer discussion period approaches its end, we are curious to know your thoughts on our rebuttal and whether you have any additional questions.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[W1] Large Dataset and Continual Learning**\\n\\nThanks for the constructive comment. 
Following your suggestion, we aim to further demonstrate the effectiveness of our VBPC method by showing that it not only achieves good performance on larger-scale datasets, on which other BPC methods even struggle to train for large ipc settings, but also outperforms other BPC baselines in a continual learning setting. By doing so, we hope to highlight VBPC's ability to handle tasks that pose challenges for other BPC baselines, proving its versatility and effectiveness.\n\nFirst, to show that our method is uniquely scalable to large datasets compared to other BPC methods, we conducted additional experiments on the ImageWoof (128x128x3) dataset [1] and the resized ImageNet1k (64x64x3) dataset [2]. Additionally, we included an experiment in a continual learning scenario to validate that our method performs better in practical scenarios.\n\nWe conducted experiments on the ImageWoof (128x128x3) dataset with ipc 1 and ipc 10 settings, as well as the resized ImageNet1k (64x64x3) dataset with ipc 1 and ipc 2 settings, to demonstrate the scalability of our method to high-resolution images and larger datasets. Unlike existing BPC baselines, which encountered memory issues and failed to train due to out-of-memory errors on an RTX 3090 GPU as the image resolution and number of classes increased, our method successfully completed training. Table R.5 clearly shows that VBPC outperforms other baselines by a large margin for both the ImageWoof and resized ImageNet1k datasets.\n\nNext, we validated the practical effectiveness of our method through continual learning experiments using pseudo-coreset images learned by each method. We followed the continual learning setup described in [3,4], where class-balanced training examples are greedily stored in memory, and the model is trained from scratch using only the latest memory. 
Specifically, we performed a 5-step class incremental learning experiment on CIFAR100 with an ipc 20 setting, following the class splits proposed in [3,4]. Table R.6 demonstrates that VBPC consistently outperforms other baselines across all steps, confirming its superior practicality and effectiveness in real-world continual learning scenarios.\n\n__Table R.5.__ Experiments on scalability using the ImageWoof and resized ImageNet datasets. Here, \u2018-\u2019 indicates that training failed due to out-of-memory problems.\n| Method | ImageWoof ipc 1 ACC | ImageWoof ipc 1 NLL | ImageWoof ipc 10 ACC | ImageWoof ipc 10 NLL | ImageNet ipc 1 ACC | ImageNet ipc 1 NLL | ImageNet ipc 2 ACC | ImageNet ipc 2 NLL |\n|:- |:- |:- |:- |:- |:- |:- |:- |:- |\n| Random | 14.2 \u00b1 0.9 | 3.84 \u00b1 0.25 | 27.0 \u00b1 1.9 | 2.83 \u00b1 0.33 | 1.1 \u00b1 0.1 | 8.32 \u00b1 0.05 | 1.4 \u00b1 0.1 | 8.10 \u00b1 0.05 |\n| BPC-CD | 18.5 \u00b1 0.1 | 2.76 \u00b1 0.05 | - | - | - | - | - | - |\n| FBPC | 14.8 \u00b1 0.1 | 3.73 \u00b1 0.02 | 28.1 \u00b1 0.3 | 2.69 \u00b1 0.09 | - | - | - | - |\n| BPC-fKL | 14.9 \u00b1 0.9 | 3.74 \u00b1 0.23 | 25.0 \u00b1 0.8 | 2.90 \u00b1 0.27 | - | - | - | - |\n| BPC-rKL | 12.0 \u00b1 0.5 | 6.07 \u00b1 0.31 | - | - | - | - | - | - |\n| VBPC | **31.2 \u00b1 0.1** | **2.13 \u00b1 0.04** | **39.0 \u00b1 0.1** | **1.84 \u00b1 0.1** | **10.0 \u00b1 0.1** | **5.33 \u00b1 0.04** | **11.5 \u00b1 0.2** | **5.25 \u00b1 0.05** |\n\n\n__Table R.6.__ Experiments on the continual learning setting. Here, we utilize the CIFAR100 dataset with an ipc 20 setting. We assume 5 steps during training, and each step contains data from 20 new classes in the CIFAR100 dataset. 
Here we only report accuracy because the number of classes varies across the steps.\n| Number of Classes | 20 | 40 | 60 | 80 | 100 |\n|---------------------------|---------------|---------------------|-------------------|-------------------|----------------|\n|BPC-CD | 52.5 \u00b1 2.4 | 40.4 \u00b1 1.3 | 35.2 \u00b1 0.8 | 33.4 \u00b1 0.5 | 29.4 \u00b1 0.2 |\n|FBPC | 61.4 \u00b1 1.8 | 53.2 \u00b1 1.5 | 48.8 \u00b1 0.7 | 43.9 \u00b1 0.4 | 41.2 \u00b1 0.3 |\n|BPC-fKL | 51.8 \u00b1 2.2 | 39.8 \u00b1 1.1 | 35.5 \u00b1 0.7 | 33.1 \u00b1 0.5 | 29.5 \u00b1 0.3 |\n|BPC-rKL | 48.2 \u00b1 2.7 | 35.5 \u00b1 1.8 | 32.0 \u00b1 1.0 | 29.8 \u00b1 0.6 | 25.5 \u00b1 0.3 |\n|VBPC | 75.3 \u00b1 2.0 | 65.8 \u00b1 1.5 | 57.1 \u00b1 0.9 | 53.3 \u00b1 0.5 | 50.3 \u00b1 0.2 |\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q8] Effect of Memory-Efficient Loss Computation**\n\nThank you for raising the question about how our memory-efficient computation impacts learning. In fact, our memory-efficient loss computation and variational inference are mathematically equivalent to their non-memory-efficient counterparts. Through mathematical reformulations, we reduce operations like matrix inversion and multiplication into computations involving smaller matrices, resulting in the same theoretical outcomes.\n\nConsequently, these memory-efficient approaches yield identical learning results and performance in theory. Practically, while numerical errors can arise during computations, our method mitigates this by operating on smaller-scale matrices, which are less prone to significant numerical errors. 
This ensures that our approach not only reduces memory usage but also maintains robust and accurate computations, leading to reliable results in practice.\\n\\n**References**\\n\\n[1] Jeremy Howard. A smaller subset of 10 easily classified classes from imagenet, and a little more french, 2020. URL https://github.com/fastai/imagenette/\\n\\n[2] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211\\u2013252, 2015.\\n\\n[3] Yongchao Zhou, Ehsan Nezhadarya, and Jimmy Ba. Dataset distillation using neural feature regression. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\\n\\n[4] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching. CoRR, abs/2110.04181, 2021. URL https://arxiv.org/abs/2110.04181.\\n\\n[5] J. Harrison, J. Willes, and J. Snoek. Variational Bayesian last layers. In International Conference on Learning Representations (ICLR), 2024a.\"}", "{\"comment\": \"Thank you for your dedication and interest in our paper. 
As the author and reviewer discussion period approaches its end, we are curious to know your thoughts on our rebuttal and whether you have any additional questions.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2] Convincing experiments**\n\nAlthough we conducted extensive additional ablation experiments alongside various tests commonly performed in BPC studies to demonstrate the effectiveness and efficiency of our VBPC approach (e.g., in terms of BMA performance, out-of-distribution performance, and model generalization), to further show that VBPC works in various scenarios in which other BPC baselines do not work well, we have also performed two more experiments based on the reviewer's request.\n\nFirst, to show that our method is uniquely scalable to large datasets compared to other BPC methods, we conducted additional experiments on the ImageWoof (128x128x3) dataset and the resized ImageNet1k (64x64x3) dataset. Additionally, we included an experiment in a continual learning scenario to validate that our method performs better in practical scenarios.\n\nWe conducted experiments on the ImageWoof (128x128x3) dataset with ipc 1 and ipc 10 settings, as well as the resized ImageNet1k (64x64x3) dataset with ipc 1 and ipc 2 settings, to demonstrate the scalability of our method to high-resolution images and larger datasets. Unlike existing BPC baselines, which encountered memory issues and failed to train due to out-of-memory errors on an RTX 3090 GPU as the image resolution and number of classes increased, our method successfully completed training. Table R.3 clearly shows that VBPC outperforms other baselines by a large margin for both the ImageWoof and resized ImageNet1k datasets.\n\nNext, we validated the practical effectiveness of our method through continual learning experiments using pseudo-coreset images learned by each method. 
We followed the continual learning setup described in [4,7], where class-balanced training examples are greedily stored in memory, and the model is trained from scratch using only the latest memory. Specifically, we performed a 5-step class incremental learning experiment on CIFAR100 with an ipc 20 setting, following the class splits proposed in [4,7]. Table R.4 demonstrates that VBPC consistently outperforms other baselines across all steps, confirming its superior practicality and effectiveness in real-world continual learning scenarios.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2] The current experiments may not adequately showcase the strengths of VBPC in relevant scenarios. Conducting additional experiments in settings where VBPC\u2019s efficiency gains could be more convincingly demonstrated.**\n\nAlthough we conducted extensive additional ablation experiments alongside various tests commonly performed in BPC studies to demonstrate the effectiveness and efficiency of our VBPC approach (e.g., in terms of BMA performance, out-of-distribution performance, and model generalization), to further show that VBPC works in various scenarios in which other BPC baselines do not work well, we have also performed two more experiments based on the reviewer's request.\n\nFirst, to show that our method is uniquely scalable to large datasets compared to other BPC methods, we conducted additional experiments on the ImageWoof (128x128x3) dataset [8] and the resized ImageNet1k (64x64x3) dataset [9]. Additionally, we included an experiment in a continual learning scenario to validate that our method performs better in practical scenarios.\n\nWe conducted experiments on the ImageWoof (128x128x3) dataset with ipc 1 and ipc 10 settings, as well as the resized ImageNet1k (64x64x3) dataset with ipc 1 and ipc 2 settings, to demonstrate the scalability of our method to high-resolution images and larger datasets. 
Unlike existing BPC baselines, which encountered memory issues and failed to train due to out-of-memory errors on an RTX 3090 GPU as the image resolution and number of classes increased, our method successfully completed training. Table R.1 clearly shows that VBPC outperforms other baselines by a large margin for both the ImageWoof and resized ImageNet1k datasets.\n\nNext, we validated the practical effectiveness of our method through continual learning experiments using pseudo-coreset images learned by each method. We followed the continual learning setup described in [4,7], where class-balanced training examples are greedily stored in memory, and the model is trained from scratch using only the latest memory. Specifically, we performed a 5-step class incremental learning experiment on CIFAR100 with an ipc 20 setting, following the class splits proposed in [4,7]. Table R.2 demonstrates that VBPC consistently outperforms other baselines across all steps, confirming its superior practicality and effectiveness in real-world continual learning scenarios.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q4] What is the take-home message of Figure 1?**\n\nThank you for asking about the interpretability of the VBPC images. Regarding the learned pseudo-coreset images for CIFAR10, the results can be found in Figure 1 of the main paper and Figure 12 in the appendix, showing the outcomes for ipc values of 1 and 10. These images reveal several interesting aspects of how VBPC captures information.\n\nFirst, both ipc 1 and ipc 10 images show that VBPC effectively learns features associated with specific classes, such as \\"horse\\" or \\"automobile,\\" as can be visually confirmed. 
This indicates that the pseudo-coreset images retain class-relevant information necessary for approximating the original dataset\\u2019s posterior distribution. When comparing ipc 1 and ipc 10, there are notable differences. In the case of ipc 1, where only a single image per class is available, VBPC attempts to encode as many class-specific features as possible into a single image. As a result, the learned image appears to incorporate multiple discriminative features from the class symmetrically. In contrast, with ipc 10, where more images per class are available, VBPC distributes the class-relevant features across multiple images. This leads to a greater diversity of features being captured across the pseudo-coreset, enabling a more comprehensive representation of the class.\\n\\nAdditionally, both ipc 1 and ipc 10 images often include low-level features beyond the main class-relevant ones. These features likely help capture the dataset's variability and ensure the learned pseudo-coreset maintains a close approximation of the original data distribution. \\n\\nThese observations suggest that VBPC is effective in compressing the dataset while retaining essential information. The learned images illustrate how VBPC balances feature extraction and information retention to ensure that the variational posterior distribution learned using the pseudo-coreset closely approximates the one learned using the full dataset. This further validates the interpretability and utility of VBPC in various tasks.\\n\\n**References**\\n\\n[1] Jeremy Howard. A smaller subset of 10 easily classified classes from imagenet, and a little more french, 2020. URL https://github.com/fastai/imagenette/\\n\\n[2] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. 
International Journal of Computer Vision (IJCV), 115(3):211\u2013252, 2015.\n\n[3] Yongchao Zhou, Ehsan Nezhadarya, and Jimmy Ba. Dataset distillation using neural feature regression. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\n\n[4] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching. CoRR, abs/2110.04181, 2021. URL https://arxiv.org/abs/2110.04181.\"}", "{\"summary\": \"This paper proposes Variational Bayesian Pseudo-Coreset (VBPC), a novel approach to efficiently approximate the posterior distribution in Bayesian Neural Networks (BNNs). Bayesian Neural Networks often face issues with large-scale datasets due to their high-dimensional parameter space. To reduce the computational load, many Bayesian Pseudo-Coreset (BPC) methods have been proposed, but they suffer from memory inefficiencies. VBPC addresses these limitations by using variational inference (VI) to approximate the posterior distribution. 
Moreover, this paper provides a memory-efficient method to approximate the predictive distribution with only a single forward pass instead of multiple forward passes, making the approach computationally and memory-efficient.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper leverages the variational formulation to obtain the closed-form posterior distribution of the last layer weights, which resolves the issue of suboptimal performance seen in previous approaches.\", \"The method approximates the predictive distribution with only a single forward pass instead of multiple forward passes, making the approach computationally and memory-efficient.\"], \"weaknesses\": [\"The experiments are not sufficient to illustrate the effectiveness of the algorithm.\"], \"questions\": [\"The accuracy of these algorithms on CIFAR10, CIFAR100, and Tiny-ImageNet is too low. VBPC is effective relative to several existing BPC baselines, but the performance is significantly lower compared to existing state-of-the-art classifiers.\", \"In the field of image classification, at least in the scenarios chosen for classification, these experiments do not seem to show the advantages of your method convincingly.\", \"Please try to provide some new experiments, in more convincing scenarios, to illustrate the practical application value of your method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[W2, Q1] Would Laplace approximation on the softmax likelihood be a better option than first choosing a variational inference scheme and then using a Gaussian likelihood?**\n\nThank you for asking such an insightful and constructive question. First, we place great importance on considering potential future directions to improve our approach. 
Here, we will discuss some concerns and challenges we foresee in adopting the reviewer\\u2019s suggestion.\\n\\nSpecifically, if we switch from using a Gaussian likelihood to employing a softmax likelihood with Laplace approximation for variational inference, there are two cases to consider: (1) using Laplace approximation on the last-layer weights without any updates, and (2) updating the last-layer weights with some gradient descent steps before applying Laplace approximation.\\n\\nIn the first case\\u2014applying Laplace approximation to weights without updating the last layer\\u2014two main issues may arise. First, the Laplace approximation assumes that the weights are near a minimum, allowing for the approximation of the first-order term in Taylor expansion as zero. However, this assumption may not hold for untrained weights, leading to significant approximation error. Additionally, the computational burden of calculating the Hessian for Laplace approximation is substantial, and the need to compute gradients through this Hessian during pseudo-coreset updates increases the computational load further.\\n\\nIn the second case\\u2014updating the last layer weights with gradient steps before applying Laplace approximation\\u2014there\\u2019s the advantage of reducing Taylor expansion error. However, this approach involves a large computational graph, which can be problematic due to the computational expense typical in bilevel optimization settings. Additionally, the need to compute gradients through the Hessian remains a challenge.\\n\\nOverall, we believe that solving these issues could lead to new meaningful future work for VBPC.\\n\\n**[Q2] Is the claim of Manousakas et al. 2020 contrary to the main message of the paper?**\\n\\nThank you for your comment. If we understand correctly, you\\u2019re referring to the paper *Bayesian Pseudocoresets* by Manousakas et al. (2020). 
We\\u2019re not entirely certain about which aspect of this work might be contrary to our approach and goals. Could you clarify which specific details or results from this paper you find relevant? Any additional insights on this point would help us provide a more precise and thorough response.\\n\\n**[Q3] Do we really lack an analytical solution or at least an EM-like algorithm where the E-step has an analytical solution when only the last layer of a neural net is probabilistic?**\\n\\nThank you for the suggestion to further improve our VBPC method. While leveraging the EM algorithm is an intriguing idea, there are still practical challenges associated with its application. \\n\\nPrimarily, the E-step and M-step of the EM algorithm lack closed-form solutions in this context. Even if we were to derive an approximation for one step to compute it in closed form, the other step would still require iterative computation. This sequential nature of the EM algorithm would lead to the accumulation of the computational graph during iterations, resulting in a memory-inefficient process when updating the pseudo-coreset. \\n\\nDespite these challenges, we agree that exploring efficient approximations to address these issues could significantly enhance the utility of VBPC. Investigating such improvements represents a **highly valuable future research direction**. We will include this discussion in the revised manuscript to acknowledge the potential of the EM algorithm as a future direction.\"}", "{\"summary\": \"The paper studies the problem of core set extraction using Bayesian inference. The proposed solution builds on a two-stage variational inference scheme where the first stage is responsible for inferring the optimal core set while the second is to fit this core set to the full-scale data set at hand. 
The developed solution covers the whole family of distributions that belong to the exponential family.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is particularly well-written with a clearly defined problem scope and a solid solution methodology that follows a well-justified sequence of development steps.\", \"The proposed bilevel variational inference formulation is neat and sensible.\", \"The computational complexity analysis is indeed helpful to see the merit of the devised solution.\", \"The reported results are strong on the chosen group of data sets.\"], \"weaknesses\": [\"The paper motivates the core set extraction problem with use cases such as processing big data and addressing continual learning setups. However, the presented results are on data sets that can be considered in the present technological landscape as toy problems. I do sympathize with the idea of prototyping. But given the strong applied component of the present work, I am still unconvinced about the generalizability of the observed scores to a case where coreset extraction is an actual necessity. The issue may be addressed during the rebuttal by showing results on a large enough data set used as a standard coreset extraction benchmark or a continual learning application.\", \"The need to use the Gaussian likelihood to avoid the need for an approximation stage is only partially convincing. It is an artifact of choosing variational Bayes as the inference scheme, which is exogenous to the problem anyway. Maybe this issue, linked to my first question below, will be clarified during the rebuttal.\"], \"questions\": [\"Would Laplace approximation on the softmax likelihood be a better option than first choosing a variational inference scheme and then using a Gaussian likelihood? Laplace proves to be a powerful approach in Gaussian process classification.\", \"Is the claim of Manousakas et al. 
2020 contrary to the main message of the paper? If correct, would this not undermine the significance of the proposed solution? If incorrect, why?\", \"Do we really lack an analytical solution or at least an EM-like algorithm where the E-step has an analytical solution when only the last layer of a neural net is probabilistic?\", \"What is the take-home message of Figure 1? I fail to see a particular pattern there that helps motivate the proposed solution.\", \"My initial score is borderline, as the paper has both certain merits and clear question marks. I am happy to consider significant score updates based on a convincing rebuttal discussion.\", \"---\"], \"post_rebuttal\": \"The authors gave convincing answers to the above questions and they published additional results that demonstrate the advantages of the proposed method more clearly than the experiments reported in the original submission. I update my score to an accept.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q1] Scalability to Larger Datasets**\n\nThanks for the constructive comment. Following your suggestion, we aim to further demonstrate the effectiveness of our VBPC method by showing that it not only achieves good performance on larger-scale datasets, on which other BPC methods even struggle to train for large ipc settings, but also outperforms other BPC baselines in a continual learning setting. By doing so, we hope to highlight VBPC's ability to handle tasks that pose challenges for other BPC baselines, proving its versatility and effectiveness.\n\nFirst, to show that our method is uniquely scalable to large datasets compared to other BPC methods, we conducted additional experiments on the ImageWoof (128x128x3) dataset [1] and the resized ImageNet1k (64x64x3) dataset [2]. 
Additionally, we included an experiment in a continual learning scenario to validate that our method performs better in practical scenarios.\n\nWe conducted experiments on the ImageWoof (128x128x3) dataset with ipc 1 and ipc 10 settings, as well as the resized ImageNet1k (64x64x3) dataset with ipc 1 and ipc 2 settings, to demonstrate the scalability of our method to high-resolution images and larger datasets. Unlike existing BPC baselines, which encountered memory issues and failed to train due to out-of-memory errors on an RTX 3090 GPU as the image resolution and number of classes increased, our method successfully completed training. Table R.7 clearly shows that VBPC outperforms other baselines by a large margin for both the ImageWoof and resized ImageNet1k datasets.\n\nNext, we validated the practical effectiveness of our method through continual learning experiments using pseudo-coreset images learned by each method. We followed the continual learning setup described in [3,4], where class-balanced training examples are greedily stored in memory, and the model is trained from scratch using only the latest memory. Specifically, we performed a 5-step class incremental learning experiment on CIFAR100 with an ipc 20 setting, following the class splits proposed in [3,4]. Table R.8 demonstrates that VBPC consistently outperforms other baselines across all steps, confirming its superior practicality and effectiveness in real-world continual learning scenarios.\n\n__Table R.7.__ Experiments on scalability using the ImageWoof and resized ImageNet datasets. 
Here \\u2018-\\u2019 indicates the training fails due to the out-of-memory problems.\\n| Metric | ImageWoof |ipc 1 | ImageWoof| ipc 10 | ImageNet| ipc 1 | ImageNet |ipc 2 |\\n|:- |:- |:- |:- |:- |:- |:- |:- |:- |\\n| | ACC | NLL | ACC | NLL | ACC | NLL | ACC | NLL |\\n|:- |:- |:- |:- |:- |:- |:- |:- |:- |\\n| Random | 14.2 \\u00b1 0.9 | 3.84 \\u00b1 0.25 | 27.0 \\u00b1 1.9 | 2.83 \\u00b1 0.33 | 1.1 \\u00b1 0.1 | 8.32 \\u00b1 0.05 | 1.4 \\u00b1 0.1 | 8.10 \\u00b1 0.05 |\\n| BPC-CD | 18.5 \\u00b1 0.1 | 2.76 \\u00b1 0.05 | - | - | - | - | - | - |\\n| FBPC | 14.8 \\u00b1 0.1 | 3.73 \\u00b1 0.02 | 28.1 \\u00b1 0.3 | 2.69 \\u00b1 0.09 | - | - | - | - |\\n| BPC-fKL | 14.9 \\u00b1 0.9 | 3.74 \\u00b1 0.23 | 25.0 \\u00b1 0.8 | 2.90 \\u00b1 0.27 | - | - | - | - |\\n| BPC-rkL | 12.0 \\u00b1 0.5 | 6.07 \\u00b1 0.31 | - | - | - | - | - | - |\\n| VBPC | **31.2 \\u00b1 0.1** | **2.13 \\u00b1 0.04** | **39.0 \\u00b1 0.1** | **1.84 \\u00b1 0.1** | **10.0 \\u00b1 0.1** | **5.33 \\u00b1 0.04** | **11.5 \\u00b1 0.2** | **5.25 \\u00b1 0.05** |\\n\\n\\n__Table R.8.__ Experiments on the continual learning setting. Here, we utilize the CIFAR100 dataset with ipc 20 setting. We assume 5 steps during training and each step contains data from new 20 classes in the CIFAR100 dataset. Here we only report accuracy due to the variant of the number of classes during the steps.\\n| Number of Classes | 20 | 40 | 60 | 80 | 100 |\\n|---------------------------|---------------|---------------------|-------------------|-------------------|----------------|\\n|BPC-CD | 52.5 \\u00b1 2.4 | 40.4 \\u00b1 1.3 | 35.2 \\u00b1 0.8 | 33.4 \\u00b1 0.5 | 29.4 \\u00b1 0.2 |\\n|FBPC | 61.4 \\u00b1 1.8 | 53.2 \\u00b1 1.5 | 48.8 \\u00b1 0.7 | 43.9 \\u00b1 0.4 | 41.2 \\u00b1 0.3 |\\n|BPC-fKL | 51.8 \\u00b1 2.2 | 39.8 \\u00b1 1.1 | 35.5 \\u00b1 0. 
7| 33.1 \\u00b1 0.5 | 29.5 \\u00b1 0.3 |\\n|BPC-rKL | 48.2 \\u00b1 2.7 | 35.5 \\u00b1 1.8 | 32.0 \\u00b1 1.0 | 29.8 \\u00b1 0.6 | 25.5 \\u00b1 0.3 |\\n|VBPC | 75.3 \\u00b1 2.0 | 65.8\\u00b1 1.5 | 57.1 \\u00b1 0.9 | 53.3 \\u00b1 0.5 | 50.3 \\u00b1 0.2 |\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2] Hyperparameter Sensitivity**\\n\\nThank you for the constructive comment. We appreciate your suggestion that analyzing the impact of hyperparameter selection on performance could provide deeper insights into our method. We fully agree with this point and have provided an extensive ablation study in Appendix E, where we detail the impact of various hyperparameters on model performance and the learned pseudo-coreset.\\n\\nRegarding hyperparameter sensitivity, our experiments show that while parameters like random initialization of the pseudo-coreset, $\\\\gamma$, and $\\\\rho$ exhibit minimal impact on performance, the most significant factor influencing the results is whether the pseudo-coreset labels are learned. As shown in Figure 6 and Table 11, not learning the labels results in substantially degraded performance compared to the full VBPC method. As analyzed in Appendix E.5, this can be attributed to the pseudo-coreset variational distribution\\u2019s mean being dependent on the labels, which directly affects the computation of the dataset VI problem. This indicates that learning the labels plays a critical role in our method's success.\\n\\nFurthermore, we observed a clear trend in the number of pseudo-coreset points on performance. As shown in our experiments for ipc = 1, 10, 50, increasing the number of pseudo-coreset points enhances performance. And we can think that the number approaches that of the full dataset, and the performance converges to that of using the entire dataset. 
These findings validate the trade-offs between memory/computation efficiency and model performance in our method.\\n\\n**[Q3] Computational Costs and Training Time**\\n\\nThank you for highlighting the importance of our contribution regarding efficient Bayesian inference with respect to computational cost. To address your suggestion, we performed analyses focusing on two aspects of computational cost:\\n\\n1. Cost of training the pseudo-coreset: \\n\\nAs mentioned in the paper, conventional BPC methods relying on SGMCMC require the creation of expert trajectories, which are training trajectories derived from the full dataset. Each dataset typically involves training with 10 different random seeds for these trajectories, making this step computationally expensive. Since all BPC baselines share and utilize these precomputed trajectories, their associated computational cost can be considered a shared overhead. \\n\\nTo isolate the computational cost of training the pseudo-coreset itself, we measured the wall-clock time required for pseudo-coreset optimization by each method. The results of this comparison are summarized in Table R.9, providing insights into how VBPC reduces training costs compared to other baselines.\\n\\n2. Cost of inference: \\n\\nWhen performing inference, VBPC requires training only a single model, whereas other BPC baselines rely on multiple SGMCMC samples. Each sample incurs significant training and inference costs, which grow linearly with the number of samples. \\n\\nTo quantify this difference, we measured the wall-clock time for inference across methods, with results presented in Table R.10. 
These results highlight how VBPC achieves superior efficiency during inference by avoiding the high computational costs associated with sampling-based approaches.\\n\\nThese analyses demonstrate VBPC\\u2019s ability to perform Bayesian inference efficiently, both in terms of pseudo-coreset training and inference, and further reinforce the computational advantages of our method.\\n\\n__Table R.9.__ Wall-clock time results for training pseudo-coresets with each BPC method using the CIFAR10 ipc 10 setting. We used an RTX 3090 GPU to measure the exact training time. Here, all methods except for VBPC share the training time for expert trajectories.\\n\\n|Method|BPC-CD|BPC-rKL|FBPC|BPC-fKL|VBPC|\\n|----------|-----------|-------------|--------|-----------|--------|\\n|Time (hr)| 5 + 8.5 | 5 + 9 | 5 + 10.5 | 5 + 12 | 5.5 | \\n\\n__Table R.10.__ Wall-clock time results for inference using learned pseudo-coresets. We measure the inference time for evaluating all the test data from the CIFAR10 test dataset. After training the pseudo-coresets, the inference cost is the same for all baselines because they all run SGMCMC and BMA with the same number of datasets and weight samples.\\n\\n|Method|BPC-CD|BPC-rKL|FBPC|BPC-fKL|VBPC|\\n|----------|------------|-----------|--------|------------|--------|\\n|Time (s)| 165 | 165 | 165 | 165 | 20 |\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for your detailed answer. This satisfies all my major concerns, in particular the scalability of the method to nontrivial tasks, which you successfully demonstrate, as well as the clarification of the approximation procedure. 
I raise my score to an accept.\"}", "{\"title\": \"General response\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and detailed feedback on our work. We are pleased that the reviewers acknowledged the strengths of our approach, including its computational and memory efficiency enabled by approximating the predictive distribution in a single forward pass (hhFq, 8mt5). We are also grateful that the well-structured problem formulation and methodological rigor, particularly the bilevel variational inference formulation and computational complexity analysis, were recognized as key merits of our paper (sqcH). Furthermore, we appreciate the recognition of the practical contributions of VBPC, notably its potential to enhance the scalability of Bayesian Neural Networks through variational inference and pseudo-coresets while addressing challenges of prior BPC methods (oE39).\\nAnd also, we thank the reviewers for their encouraging remarks regarding the clarity, robustness, and potential impact of our proposed method. Their insights provide valuable validation of our contributions and motivate us to continue improving the manuscript.\"}", "{\"metareview\": \"The authors propose a solution to the coreset problem that uses a two-stage variational inference approach. Four authors reviewed the paper and three recommended accept after the rebuttal period, with one borderline reject. They felt the paper was well-written and motivated, technically well developed and original, and contained a compelling set of results on good data sets. The authors provided a convincing additional set of results in the rebuttal. Space permitting, these may be incorporated in the final draft, but the paper is strong enough as is and should be accepted to the conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors raised valid concerns about review authenticity from two reviewers who submitted extremely similar reviews. 
While AI was suggested, it is notable to me that these two reviewers are both PhD students at the same university and so may have simply been working together on their reviews, which of course is still inappropriate. However, that then raises the question why they thought it wouldn't be noticed, and so AI may have been used. I've discounted their opinions, but recognize that the authors provided a thorough response to all reviewers and were able to address the concerns of all reviewers in their (and my) opinions. I have also read the paper and think it is technically interesting and well developed, with good and detailed empirical evaluation. I think it would make a good contribution to the conference and recommend acceptance.\"}", "{\"summary\": \"The paper presents the Variational Bayesian Pseudo-Coreset (VBPC) method, aimed at efficiently approximating the posterior distribution in Bayesian Neural Networks (BNNs). Given that BNNs face substantial computational challenges when dealing with large datasets due to their high-dimensional parameter spaces, VBPC provides a promising method. Traditional Bayesian Pseudo-Coreset (BPC) techniques have been proposed to alleviate these issues, yet they often struggle with memory inefficiencies. VBPC addresses this by leveraging variational inference (VI) to approximate the posterior distribution. 
This method achieves a memory-efficient approximation of the predictive distribution using only a single forward pass, which makes it appealing for computationally intensive applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper effectively utilizes variational inference to derive a closed-form posterior distribution for the weights of the last layer, thereby addressing some of the performance limitations observed in prior BPC approaches.\\nVBPC\\u2019s capability to approximate the predictive distribution in a single forward pass enhances both computational and memory efficiency, positioning it as a potentially valuable method for large-scale applications.\", \"weaknesses\": \"The experimental validation on practical application is limited.\", \"questions\": [\"While VBPC demonstrates improvement over some BPC baselines, its classification accuracy on benchmark datasets like CIFAR-10, CIFAR-100, and Tiny-ImageNet remains notably lower than that of state-of-the-art classifiers. This raises concerns about the practical competitiveness of VBPC in real-world applications, particularly in image classification tasks where accuracy is crucial. Could additional optimizations or refinements to the VBPC approach improve performance?\", \"The current experiments may not adequately showcase the strengths of VBPC in relevant scenarios. 
To enhance the paper\\u2019s impact and applicability, I suggest conducting additional experiments in settings where VBPC\\u2019s efficiency gains could be more convincingly demonstrated.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revision for Paper\", \"comment\": \"We sincerely thank all the reviewers for their valuable time and effort in helping to improve our paper and enhance its completeness through additional analyses, ablation studies, and new tasks showcasing the effectiveness of VBPC. As the discussion period concludes, we would like to share that we have incorporated the feedback and additional experiments into our revised manuscript. We promise that we will more carefully review and revise for the final manuscript.\", \"the_key_updates_in_our_revision_include\": [\"Discussion on the Laplace approximation with softmax likelihood: Sec D\", \"Discussion on the Last-Layer Approximation: Sec D\", \"Additional experiments on large datasets and continual learning: Sec F.2\", \"Additional experiments for OOD detection: Sec F.3\", \"Computational cost analysis: Sec F.4\", \"Key insights from learned images: Sec G.1\"]}", "{\"summary\": \"The paper titled \\u201cVariational Bayesian Pseudo-Coreset\\u201d introduces a novel method aimed at reducing the computational and memory challenges associated with large datasets in deep learning, particularly within the context of Bayesian Neural Networks (BNNs). The authors address limitations in prior methods of Bayesian Pseudo-Coresets (BPCs), which often face inefficiencies in memory usage and suboptimal results during training. They propose a new approach called Variational Bayesian Pseudo-Coreset (VBPC), which leverages variational inference to approximate the posterior distribution of model weights. 
The key innovation of VBPC is the use of a closed-form solution to compute the posterior for the last layer of BNNs, eliminating the need for complex gradient-stopping techniques used in previous BPC methods. This significantly reduces memory usage and computational load. Additionally, VBPC allows for more efficient training and inference by using a single forward pass for predictive distribution computation. Empirical evaluations demonstrate that VBPC outperforms existing BPC methods on benchmark datasets, showing improvements in both accuracy and negative log-likelihood metrics across various datasets, such as MNIST, CIFAR10, and CIFAR100. The paper contributes to the field by enhancing the efficiency and scalability of BNNs, particularly in environments that require handling large-scale data while maintaining the benefits of Bayesian inference.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a novel approach to improving the efficiency of Bayesian Neural Networks (BNNs) by combining variational inference with pseudo-coresets. This innovation is notable because it addresses the longstanding challenges in the field related to computational and memory inefficiencies when scaling BNNs to large datasets. The proposal of a closed-form solution for last-layer variational inference is a significant departure from prior methods that relied on more memory-intensive sampling-based approaches. By focusing on variational inference and pseudo-coresets, the authors provide an original contribution that builds on existing methods but removes key limitations such as memory usage and the reliance on gradient stopping.\\nThe technical rigor of the paper is high, with well-founded theoretical development and comprehensive empirical evaluations. The authors derive closed-form solutions for coreset variational inference, addressing a critical computational bottleneck in Bayesian model averaging. 
Their empirical results, demonstrated across multiple datasets (e.g., MNIST, CIFAR10, CIFAR100), show significant improvements over existing BPC methods in terms of both accuracy and negative log-likelihood, which strengthens the quality of their proposed method. The paper also includes a variety of comparisons with competitive baselines, reinforcing the robustness and effectiveness of their approach.\\nThe paper is clearly structured and provides sufficient background for readers to understand both the motivation and the details of the proposed method. The explanation of the problem, the limitations of prior work, and the step-by-step presentation of the VBPC approach are clear and easy to follow. The use of mathematical derivations is well-supported by intuitive explanations, making the complex variational inference approach more accessible. The inclusion of visual results and performance tables also contributes to clarity, helping readers visualize the practical benefits of VBPC.\\nThe significance of the work lies in its potential to influence how BNNs are applied to large-scale data problems. By significantly reducing the memory and computational burdens of Bayesian model averaging, the proposed VBPC method makes BNNs more feasible for real-world applications, such as those in healthcare and climate analysis, where uncertainty estimation is critical. The method could have broad implications for other fields requiring scalable, probabilistic neural networks. Additionally, the ability to perform Bayesian inference with less computational overhead enhances the practicality of deploying BNNs in production environments.\", \"weaknesses\": \"The paper focuses heavily on benchmark datasets like MNIST, CIFAR10, and CIFAR100, which are common in academic research but may not fully represent the complexity of real-world problems. 
While these datasets help establish baseline performance, the paper would benefit from exploring more challenging, domain-specific datasets, particularly those that are more representative of practical applications in fields such as healthcare or finance. Expanding the evaluation to datasets that feature more variability and noise could demonstrate the method\\u2019s robustness in real-world settings, which is especially important given the paper\\u2019s goal of making Bayesian Neural Networks more feasible for large-scale applications.\\nAlthough the paper demonstrates memory efficiency improvements, there is no extensive discussion of the scalability of the method when applied to very large datasets beyond those tested (e.g., ImageNet or even larger datasets in natural language processing). The paper could benefit from a more detailed analysis of the method\\u2019s behavior as the dataset size grows significantly. Additionally, the paper does not provide enough insight into the sensitivity of the method to hyperparameter choices such as the coreset size or the initialization of the model pool. It would be helpful to include an ablation study or sensitivity analysis that investigates how performance degrades with suboptimal hyperparameter choices and whether the method requires careful tuning to achieve competitive results.\\nWhile the paper emphasizes memory savings, it does not provide a thorough comparison of training times between VBPC and existing methods, particularly in scenarios with high-dimensional datasets. A more detailed analysis of wall-clock time or computational complexity across different hardware configurations (e.g., GPUs versus CPUs) would be useful. 
This would help practitioners better understand the trade-offs between memory savings and potential increases in computational time, especially when scaling to larger architectures and datasets.\\nThe paper relies on the last-layer variational approximation to simplify the posterior calculation, but the limitations of this approach are not thoroughly discussed. While the paper suggests that this approximation performs comparably to more complex methods, it would be valuable to include a deeper investigation of when this approximation might fail, especially in models with deep architectures or tasks requiring fine-grained uncertainty estimation. A discussion on whether the approximation is sufficient in all use cases or only certain tasks (e.g., classification versus regression) would make the paper more transparent.\\nThe paper demonstrates that VBPC can learn pseudo-coresets that effectively approximate the full dataset\\u2019s posterior, but it doesn\\u2019t provide much insight into the interpretability of these pseudo-coresets. For example, what are the learned coresets capturing in terms of dataset distribution or feature representation? A qualitative analysis, such as visualizing the pseudo-coresets or interpreting what aspects of the data they retain, would help reinforce the method\\u2019s effectiveness. Additionally, further explanation of how these pseudo-coresets evolve during training and contribute to Bayesian uncertainty could strengthen the narrative.\\nWhile the paper briefly touches on robustness to distributional shifts using CIFAR10-C, the evaluation of predictive uncertainty in real-world settings is somewhat lacking. It would be useful to see how VBPC handles more complex out-of-distribution (OOD) detection tasks or how well it captures uncertainty under adversarial conditions, which are critical aspects of Bayesian inference in high-stakes applications like healthcare. 
A more thorough evaluation in these contexts could elevate the practical relevance of the method.\", \"questions\": \"1. Scalability to Larger Datasets: How does VBPC perform when applied to much larger datasets, such as ImageNet or larger text datasets? Does the method scale well in terms of both computational efficiency and accuracy, or does it encounter bottlenecks?\\n\\n2. Hyperparameter Sensitivity: How sensitive is VBPC to hyperparameters such as the number of pseudo-coresets, coreset size, and model initialization? Do suboptimal hyperparameter settings lead to significant performance degradation?\\n\\n3. Computational Costs and Training Time: Can you provide a detailed comparison of training times and wall-clock time between VBPC and other methods, particularly Bayesian Pseudo-Coreset methods using SGMCMC? How does the computational time scale with increasing dataset size?\\n\\n4. Limitations of the Last-Layer Approximation: Does the last-layer variational approximation hold up in deeper networks or more complex tasks such as regression? Have you observed any failure cases where this approximation does not capture enough uncertainty?\\n\\n5. Interpretability of Pseudo-Coresets: What do the learned pseudo-coresets represent? Are they capturing key features of the original dataset, and if so, how do they evolve during training? Is there a way to interpret or visualize the coreset to provide better intuition about what is being distilled?\\n\\n6. Generalization to Different Tasks: Can the VBPC method be applied effectively to tasks beyond classification, such as regression or other types of Bayesian inference? If so, how does the method adapt to these different problem types?\\n\\n7. Robustness to Out-of-Distribution (OOD) Data and Adversarial Attacks: Does VBPC provide any robustness to adversarial attacks or strong distribution shifts beyond what is demonstrated with CIFAR10-C? How does the method perform in more severe OOD scenarios?\\n\\n8. 
Memory-Efficient Loss Computation: How significant is the impact of memory-efficient loss computation during training in terms of accuracy or stability? Does it introduce any trade-offs in performance, particularly in very high-dimensional settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate the effort and dedication you have put into reviewing our paper. As the deadline for authors to respond or engage in further discussions approaches, we are curious if you have any remaining concerns. We kindly request your feedback on our responses to address any additional questions you may have.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"__Table R.1.__ Experiments on the scalability utilizing ImageWoof and resized ImageNet datasets. Here \\u2018-\\u2019 indicates that training failed due to out-of-memory problems.\\n\\n| Method | ACC (ImageWoof, ipc 1) | NLL (ImageWoof, ipc 1) | ACC (ImageWoof, ipc 10) | NLL (ImageWoof, ipc 10) | ACC (ImageNet, ipc 1) | NLL (ImageNet, ipc 1) | ACC (ImageNet, ipc 2) | NLL (ImageNet, ipc 2) |\\n|:- |:- |:- |:- |:- |:- |:- |:- |:- |\\n| Random | 14.2 \\u00b1 0.9 | 3.84 \\u00b1 0.25 | 27.0 \\u00b1 1.9 | 2.83 \\u00b1 0.33 | 1.1 \\u00b1 0.1 | 8.32 \\u00b1 0.05 | 1.4 \\u00b1 0.1 | 8.10 \\u00b1 0.05 |\\n| BPC-CD | 18.5 \\u00b1 0.1 | 2.76 \\u00b1 0.05 | - | - | - | - | - | - |\\n| FBPC | 14.8 \\u00b1 0.1 | 3.73 \\u00b1 0.02 | 28.1 \\u00b1 0.3 | 2.69 \\u00b1 0.09 | - | - | - | - |\\n| BPC-fKL | 14.9 \\u00b1 0.9 | 3.74 \\u00b1 0.23 | 25.0 \\u00b1 0.8 | 2.90 \\u00b1 0.27 | - | - | - | - |\\n| BPC-rKL | 12.0 \\u00b1 0.5 | 6.07 \\u00b1 0.31 | - | - | - | - | - | - |\\n| VBPC | **31.2 \\u00b1 0.1** | **2.13 \\u00b1 0.04** | **39.0 \\u00b1 0.1** | **1.84 \\u00b1 0.1** | **10.0 \\u00b1 0.1** | **5.33 \\u00b1 0.04** | **11.5 \\u00b1 0.2** | **5.25 \\u00b1 0.05** |\\n\\n__Table R.2.__ Experiments on the continual learning setting. 
Here, we utilize the CIFAR100 dataset with the ipc 20 setting. We assume 5 steps during training, and each step contains data from 20 new classes in the CIFAR100 dataset. We report only accuracy because the number of classes varies across steps.\\n| Number of Classes | 20 | 40 | 60 | 80 | 100 |\\n|---------------------------|---------------|---------------------|-------------------|-------------------|----------------|\\n|BPC-CD | 52.5 \\u00b1 2.4 | 40.4 \\u00b1 1.3 | 35.2 \\u00b1 0.8 | 33.4 \\u00b1 0.5 | 29.4 \\u00b1 0.2 |\\n|FBPC | 61.4 \\u00b1 1.8 | 53.2 \\u00b1 1.5 | 48.8 \\u00b1 0.7 | 43.9 \\u00b1 0.4 | 41.2 \\u00b1 0.3 |\\n|BPC-fKL | 51.8 \\u00b1 2.2 | 39.8 \\u00b1 1.1 | 35.5 \\u00b1 0.7 | 33.1 \\u00b1 0.5 | 29.5 \\u00b1 0.3 |\\n|BPC-rKL | 48.2 \\u00b1 2.7 | 35.5 \\u00b1 1.8 | 32.0 \\u00b1 1.0 | 29.8 \\u00b1 0.6 | 25.5 \\u00b1 0.3 |\\n|VBPC | 75.3 \\u00b1 2.0 | 65.8 \\u00b1 1.5 | 57.1 \\u00b1 0.9 | 53.3 \\u00b1 0.5 | 50.3 \\u00b1 0.2 |\\n\\nReferences\\n\\n[1] Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, and Juho Lee. On divergence measures for bayesian pseudocoresets. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\\n\\n[2] Balhae Kim, Hyungi Lee, and Juho Lee. Function space bayesian pseudocoreset for bayesian neural networks. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.\\n\\n[3] Piyush Tiwary, Kumar Shubham, Vivek V Kashyap, and AP Prathosh. Bayesian pseudo-coresets via contrastive divergence. In The 40th Conference on Uncertainty in Artificial Intelligence, 2024.\\n\\n[4] Yongchao Zhou, Ehsan Nezhadarya, and Jimmy Ba. Dataset distillation using neural feature regression. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\\n\\n[5] Noel Loo, Ramin Hasani, Mathias Lechner, and Daniela Rus. Dataset distillation with convexified implicit gradients. 
In Proceedings of The 39th International Conference on Machine Learning (ICML 2023), 2023.\\n\\n[6] George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4750\\u20134759, 2022.\\n\\n[7] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching. CoRR, abs/2110.04181, 2021. URL https://arxiv.org/abs/2110.04181.\\n\\n[8] Jeremy Howard. A smaller subset of 10 easily classified classes from imagenet, and a little more french, 2020. URL https://github.com/fastai/imagenette/\\n\\n[9] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211\\u2013252, 2015.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"__Table R.3.__ Experiments on the scalability utilizing ImageWoof and resized ImageNet datasets. 
Here \\u2018-\\u2019 indicates that training failed due to out-of-memory problems.\\n| Method | ACC (ImageWoof, ipc 1) | NLL (ImageWoof, ipc 1) | ACC (ImageWoof, ipc 10) | NLL (ImageWoof, ipc 10) | ACC (ImageNet, ipc 1) | NLL (ImageNet, ipc 1) | ACC (ImageNet, ipc 2) | NLL (ImageNet, ipc 2) |\\n|:- |:- |:- |:- |:- |:- |:- |:- |:- |\\n| Random | 14.2 \\u00b1 0.9 | 3.84 \\u00b1 0.25 | 27.0 \\u00b1 1.9 | 2.83 \\u00b1 0.33 | 1.1 \\u00b1 0.1 | 8.32 \\u00b1 0.05 | 1.4 \\u00b1 0.1 | 8.10 \\u00b1 0.05 |\\n| BPC-CD | 18.5 \\u00b1 0.1 | 2.76 \\u00b1 0.05 | - | - | - | - | - | - |\\n| FBPC | 14.8 \\u00b1 0.1 | 3.73 \\u00b1 0.02 | 28.1 \\u00b1 0.3 | 2.69 \\u00b1 0.09 | - | - | - | - |\\n| BPC-fKL | 14.9 \\u00b1 0.9 | 3.74 \\u00b1 0.23 | 25.0 \\u00b1 0.8 | 2.90 \\u00b1 0.27 | - | - | - | - |\\n| BPC-rKL | 12.0 \\u00b1 0.5 | 6.07 \\u00b1 0.31 | - | - | - | - | - | - |\\n| VBPC | **31.2 \\u00b1 0.1** | **2.13 \\u00b1 0.04** | **39.0 \\u00b1 0.1** | **1.84 \\u00b1 0.1** | **10.0 \\u00b1 0.1** | **5.33 \\u00b1 0.04** | **11.5 \\u00b1 0.2** | **5.25 \\u00b1 0.05** |\\n\\n__Table R.4.__ Experiments on the continual learning setting. Here, we utilize the CIFAR100 dataset with the ipc 20 setting. We assume 5 steps during training, and each step contains data from 20 new classes in the CIFAR100 dataset. We report only accuracy because the number of classes varies across steps.\\n| Number of Classes | 20 | 40 | 60 | 80 | 100 |\\n|---------------------------|---------------|---------------------|-------------------|-------------------|----------------|\\n|BPC-CD | 52.5 \\u00b1 2.4 | 40.4 \\u00b1 1.3 | 35.2 \\u00b1 0.8 | 33.4 \\u00b1 0.5 | 29.4 \\u00b1 0.2 |\\n|FBPC | 61.4 \\u00b1 1.8 | 53.2 \\u00b1 1.5 | 48.8 \\u00b1 0.7 | 43.9 \\u00b1 0.4 | 41.2 \\u00b1 0.3 |\\n|BPC-fKL | 51.8 \\u00b1 2.2 | 39.8 \\u00b1 1.1 | 35.5 \\u00b1 0.7 | 33.1 \\u00b1 0.5 | 29.5 \\u00b1 0.3 |\\n|BPC-rKL | 48.2 \\u00b1 2.7 | 35.5 \\u00b1 1.8 | 32.0 \\u00b1 1.0 | 29.8 \\u00b1 0.6 | 25.5 \\u00b1 0.3 |\\n|VBPC | 75.3 \\u00b1 2.0 | 65.8 \\u00b1 1.5 | 57.1 \\u00b1 0.9 | 53.3 \\u00b1 0.5 | 50.3 \\u00b1 0.2 |\\n\\n\\nReferences\\n\\n[1] Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, and Juho Lee. On divergence measures for bayesian pseudocoresets. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\\n\\n[2] Balhae Kim, Hyungi Lee, and Juho Lee. Function space bayesian pseudocoreset for bayesian neural networks. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.\\n\\n[3] Piyush Tiwary, Kumar Shubham, Vivek V Kashyap, and AP Prathosh. Bayesian pseudo-coresets via contrastive divergence. In The 40th Conference on Uncertainty in Artificial Intelligence, 2024.\\n\\n[4] Yongchao Zhou, Ehsan Nezhadarya, and Jimmy Ba. Dataset distillation using neural feature regression. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.\\n\\n[5] Noel Loo, Ramin Hasani, Mathias Lechner, and Daniela Rus. Dataset distillation with convexified implicit gradients. In Proceedings of The 39th International Conference on Machine Learning (ICML 2023), 2023.\\n\\n[6] George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4750\\u20134759, 2022.\\n\\n[7] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching. CoRR, abs/2110.04181, 2021. URL https://arxiv.org/abs/2110.04181.\\n\\n[8] Jeremy Howard. A smaller subset of 10 easily classified classes from imagenet, and a little more french, 2020. 
URL https://github.com/fastai/imagenette/\\n\\n[9] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211\\u2013252, 2015.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"**[Q1] Performance is lower than that of state-of-the-art classifiers. Could additional optimizations or refinements to the VBPC approach improve performance?**\\n\\nThank you for the considerable effort and assistance you've put into reviewing our paper. However, it seems there may be a misunderstanding regarding this particular weakness you pointed out. The goal of our method is not to develop a state-of-the-art model for a specific dataset, nor is it to create an inference method that achieves higher performance by leveraging existing state-of-the-art models. Rather, our research focuses on effectively summarizing a large volume of training data into a minimal yet well-representative set of data points, thereby reducing the computational and memory burdens needed for learning.\\n\\nFor example, in the case of CIFAR10, we demonstrated that our method could achieve a strong accuracy of 55% using only 10 images (which is just 0.2% of the training data) instead of the original 60,000 images. Furthermore, the model and dataset setups we use are consistent with the benchmark configurations employed in various dataset distillation and Bayesian pseudo-coreset studies, such as [1,2,3,4,5,6]. These studies, for the sake of fair comparison, fix model architecture to certain layer sizes and kernel sizes, which, of course, results in models that may not match the performance of SOTA models. 
However, our VBPC method could indeed be practically utilized in conjunction with SOTA models to learn a pseudo-coreset.\\n\\nNotably, our method requires only the last layer for variational inference, making it significantly easier to apply to large models such as ViTs compared to existing Bayesian pseudo-coreset methods. A major drawback of previous BPC methods is that they require a pre-trained target model (e.g., ViT) along with a large number of expert trajectories obtained by training the ViT model multiple times with different random seeds. In contrast, our method does not require pre-training multiple ViT models, making it a much more efficient approach to pseudo-coreset learning.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q4] Limitations of the Last-Layer Approximation**\\n\\nThank you for the insightful comment. As you pointed out, there might be concerns that considering the posterior distribution of only the last layer weights, rather than the entire parameter set, could limit the model's ability to capture uncertainty effectively, especially as the model size increases and tasks become more complex. We fully agree that this is a valid concern and would like to provide a discussion based on related findings.\\n\\nSpecifically, [5] provides extensive empirical evidence on the effectiveness of last-layer variational inference. Their experiments span diverse tasks, including regression with UCI datasets, image classification using a Wide ResNet model, and sentiment classification leveraging LLM features from the OPT-175B model. They compared their method with other Bayesian inference approaches such as Dropout, Ensemble methods, and Laplace approximation for the full model. 
Their results demonstrate that even though last-layer variational inference focuses solely on the final layer weights, it achieves performance comparable to other comprehensive Bayesian inference techniques across various tasks.\\n\\nThese findings indicate that while conducting Bayesian inference on the full set of weights in a neural network could potentially lead to more precise uncertainty estimation, employing last-layer variational inference is still effective in capturing meaningful uncertainty. \\n\\nWe believe that extending VBPC to incorporate full-weight variational inference could be a promising direction for future work, offering the potential to further enhance the method's uncertainty estimation capabilities. We will include this discussion in the final manuscript to provide a balanced perspective and acknowledge possible avenues for improvement.\\n\\n**[Q5] Interpretability of Pseudo-Coresets**\\n\\nThank you for asking about the interpretability of the VBPC images. Regarding the learned pseudo-coreset images for CIFAR10, the results can be found in Figure 1 of the main paper and Figure 12 in the appendix, showing the outcomes for ipc values of 1 and 10. These images reveal several interesting aspects of how VBPC captures information.\\n\\nFirst, both ipc 1 and ipc 10 images show that VBPC effectively learns features associated with specific classes, such as \\\"horse\\\" or \\\"automobile,\\\" as can be visually confirmed. This indicates that the pseudo-coreset images retain class-relevant information necessary for approximating the original dataset\\u2019s posterior distribution. When comparing ipc 1 and ipc 10, there are notable differences. In the case of ipc 1, where only a single image per class is available, VBPC attempts to encode as many class-specific features as possible into a single image. As a result, the learned image appears to incorporate multiple discriminative features from the class symmetrically. 
In contrast, with ipc 10, where more images per class are available, VBPC distributes the class-relevant features across multiple images. This leads to a greater diversity of features being captured across the pseudo-coreset, enabling a more comprehensive representation of the class.\\n\\nAdditionally, both ipc 1 and ipc 10 images often include low-level features beyond the main class-relevant ones. These features likely help capture the dataset's variability and ensure the learned pseudo-coreset maintains a close approximation of the original data distribution. \\n\\nThese observations suggest that VBPC is effective in compressing the dataset while retaining essential information. The learned images illustrate how VBPC balances feature extraction and information retention to ensure that the variational posterior distribution learned using the pseudo-coreset closely approximates the one learned using the full dataset. This further validates the interpretability and utility of VBPC in various tasks.\"}", "{\"comment\": \"Thank you for the positive review of our paper. We will organize the experiments and discussions conducted during the discussion period and incorporate them into the final manuscript.\"}" ] }
0N8yq8QwkD
Mani-GS: Gaussian Splatting Manipulation with Triangular Mesh
[ "Xiangjun Gao", "Xiaoyu Li", "Yiyu Zhuang", "Qi Zhang", "Wenbo Hu", "Chaopeng Zhang", "Yao Yao", "Ying Shan", "Long Quan" ]
Neural 3D representations such as Neural Radiance Fields (NeRFs), excel at producing photo-realistic rendering results but lack the flexibility for manipulation and editing which is crucial for content creation. Previous works have attempted to address this issue by deforming a NeRF in canonical space or manipulating the radiance field based on an explicit mesh. However, manipulating NeRF is not highly controllable and requires a long training and inference time. With the emergence of 3D Gaussian Splatting (3DGS), extremely high-fidelity novel view synthesis can be achieved using an explicit point-based 3D representation with much faster training and rendering speed. However, there is still a lack of effective means to manipulate 3DGS freely while maintaining rendering quality. In this work, we aim to tackle the challenge of achieving manipulable photo-realistic rendering. We propose to utilize a triangular mesh to manipulate 3DGS directly with self-adaptation. This approach reduces the need to design various algorithms for different types of Gaussian manipulation. By utilizing a triangle shape-aware Gaussian binding and adapting method, we can achieve 3DGS manipulation and preserve high-fidelity rendering after manipulation. Our approach is capable of handling large deformations, local manipulations, and even physics simulations while keeping high-quality rendering. Furthermore, we demonstrate that our method is also effective with inaccurate meshes extracted from 3DGS. Experiments conducted on NeRF synthetic datasets demonstrate the effectiveness of our method and its superiority over baseline approaches.
[ "Editable Rendring; 3DGS; Differential Rendering" ]
https://openreview.net/pdf?id=0N8yq8QwkD
https://openreview.net/forum?id=0N8yq8QwkD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xG2Iwe1NNc", "sF4yIpmiQR", "kmvAuxnIuY", "hnW6ruBERn", "Q7p4EtQusO", "NtHMjGmsGx" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730592977270, 1730180013517, 1730633377120, 1730179572868, 1730354710074, 1731512462607 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9059/Reviewer_XDBV" ], [ "ICLR.cc/2025/Conference/Submission9059/Reviewer_1SJC" ], [ "ICLR.cc/2025/Conference/Submission9059/Reviewer_EcEp" ], [ "ICLR.cc/2025/Conference/Submission9059/Reviewer_6rXy" ], [ "ICLR.cc/2025/Conference/Submission9059/Reviewer_umHH" ], [ "ICLR.cc/2025/Conference/Submission9059/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper utilizes a given triangular mesh for free-form shape deformation of Gaussian-splatting with self-adaption. By parameterizing each Gaussian in the local triangle space and decoupling the local and global transformations, the proposed method maintains the local rigidity and preserves the relative location between Gaussian, which is robust to inaccurate meshes. The authors demonstrate the method editability in three tasks, large deformation, local manipulation, and soft body simulation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written and straightforward.\\n2. The paper proposes to bind Gaussians to a local triangle space, which maintains the local rigidity and preserves the relative\\nlocation between Gaussians, allowing the method to preserve the high-fidelity rendering results after manipulation.\\n2. The manipulation results are vivid and interesting, especially the soft body simulation.\\n3. The authors demonstrate the editability of the method on three different tasks, which shows the capability of the method in various scenarios.\", \"weaknesses\": \"1. 
The authors need to compare their method with GaMeS or Mesh-GS to demonstrate their contributions. In comparison to GaMeS, which constrains the Gaussians on the surface exactly, the main contributions of Mani-GS are (1) attaching the Gaussians to local space rather than global space, and (2) allowing Gaussians to offset out of the attached triangle. Could the authors provide some qualitative results that support those two contributions, in terms of the rendering quality given an inaccurate mesh and the rendering quality after manipulation? For example, given the Poisson mesh in Fig.8, where part of the pot is missing, Mani-GS can better fill the missing part than GaMeS since it allows the offset. And for example, show a case where the rendering quality of Mani-GS is better than GaMeS after manipulation due to the local triangle space.\\n\\n2. In line 333, the authors propose to use an adaption vector to scale both the offset vector and the scale of the Gaussian. However, the adaption vector solely depends on the length of the three triangle edges. Imagine stretching a triangle along its plane: e1, e2, e3 will all increase as the edge lengths get larger. Since e2 increases, the offset of the Gaussian along the triangle's normal direction will get larger, and the Gaussian will move farther from the plane. A concrete example could be the Poisson mesh in Fig. 8: since part of the pot is missing, there must be lots of Gaussians with large offsets along the normal directions to be able to reconstruct the pot; in that case, if you stretch the pot vertically, I'd expect the Gaussians to expand horizontally as well. Does this lead to artifacts empirically? I'm happy to hear any comments on this. \\n\\n3. Is the mesh used in Table 1 extracted from SuGaR? If not, what's the mesh used there, and could you provide the results using the SuGaR mesh for a fair comparison? If yes, it seems the average PSNR is different from what is mentioned later in line 505.\\n\\n4. 
Do you regularize the scale of the local position \\mu? I'm concerned that a Gaussian could significantly offset the attached triangle, potentially causing artifacts after manipulation.\\n\\n5. In line 44 NeRF-Editing is referred to as Yuan et al. 2022, but in the rest of the paper (for example lines 365 and 379) it becomes Liu et al. 2021. The former approach is more relevant for comparison, as it aligns more closely with the context of free-form shape deformation, which the latter approach does not directly address. Is it a typo?\", \"questions\": \"Please check the weakness section. My main concern is the comparison with GaMeS or Mesh-GS.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This submission describes an approach for deforming a 3D scene represented by 3D Gaussians. Towards this goal, the proposed method extracts a triangle mesh, binds the 3D Gaussians representing the scene to the mesh (on- and off-surface), and then uses the mesh to drive rigid and non-rigid deformations of the 3D Gaussians for object deformation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is reasonably well written and easy to understand. It addresses the important challenge of editing 3D scenes represented by 3D Gaussians. The qualitative results look compelling and quantitatively outperform the sugar baseline.\", \"weaknesses\": \"There are several weaknesses:\\n\\n1. The proposed method is incremental compared with sugar. The proposed method is basically sugar, which binds the optimized 3D Gaussians to the mesh surface, with an additional offset. This seems like a simple extension. 
More advanced extensions of sugar already exist, including \\n\\nGuedon and Lepetit, Gaussian Frosting: Editable Complex Radiance Fields with Real-Time Rendering, ECCV 2024\\n\\nwhich model a much broader class of objects than both sugar and the proposed work.\\n\\n2. The related work discussion is too focused on NeRF, instead of giving a more comprehensive snapshot of approaches that enable animatable / deformable 3D Gaussians. A few examples:\\n\\nHuang and Yu, GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting\\nAbdal et al., Gaussian Shell Maps for Efficient 3D Human Generation, CVPR 2024\\nYang et al., Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction, CVPR 2024\\n\\nSo relevant papers published at CVPR 2024, ECCV 2024, and also SIGGRAPH Asia 2024\\n\\n3. Some claims on the capabilities of the proposed system seem exaggerated\\n\\nThe presented results look good, but they mainly show local and non-rigid deformations. I did not see examples that show the claimed \\\"large deformations\\\" (see e.g. abstract & introduction)\", \"questions\": \"1. Can you please summarize additional recent baselines, including Gaussian Frosting, and compare against those?\\n\\n2. Please show large deformations or avoid claiming them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors improve the methodology of manipulating renderings generated by 3D Gaussian Splatting (3DGS). To achieve this, they propose the use of a triangular mesh (generated by NeuS) as initial input to the 3DGS. 
Additionally, the authors propose a triangle-aware 3DGS to improve the manipulation and rendering capability.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The strengths of this work lie mainly in addressing a highly relevant problem and achieving strong results compared to the chosen state-of-the-art methods.\", \"weaknesses\": [\"This work suffers from a few larger issues:\", \"Poor writing quality. Here, we mostly mean that the paper heavily introduces and talks about NeRF in the introduction and related work, while this is not relevant for understanding the paper. Further, structurally the paper needs some improvements (for example, a Figure is mentioned on page 4 but not seen until page 6)\", \"In general, while the method works decently, the contributions do not seem to be sufficient\", \"Compared to SuGaR, the authors here use better meshes that are generated from NeuS (higher training time). The authors should address the differences in the training in their work.\"], \"questions\": [\"On line 505, you mention that the results using the SuGaR mesh are 33.676 dB (You + SuGaR), which is higher than 33.34 (You + NeuS). Why use NeuS if this is the case? If this is the case, the contribution on a quantitative level does not seem to be significant.\", \"In Table 1, please add, if possible, the rendering results of NeuS so that it can be seen how much the authors improve on the work of NeuS.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This article focuses on the editing of Gaussians. Its central idea is to first perform mesh modeling of the scene, then bind 3DGS to the mesh for topology-consistent training. To better bind Gaussians to the mesh\\u2019s triangular surfaces, the paper proposes a coordinate system definition method based on triangles, allowing the topology to maintain a more stable structure. 
Once completed, this enables Gaussian editing and simulation similar to mesh manipulation. The authors conducted experiments on NeRF synthetic data and DTU data, achieving the expected editing effects.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The article has a clear logic and provides an in-depth analysis of the problem. For example, when discussing how to bind Gaussians to the mesh (Sec.3.3), the authors compared two alternative methods (\\\"Gaussians on Mesh\\\" and \\\"Gaussians on Mesh with Offset\\\"), analyzing the principles, advantages, and disadvantages of each. Another example is the authors' discussion of the results from different mesh extraction methods (Sec.3.2).\\n2. The supplementary materials are meticulously prepared, and the demo presentation is impressive, showcasing excellent results on editing and simulating.\", \"weaknesses\": \"1. The discussion of some works is insufficient. For example, GaMeS and Mesh-GS are mentioned in the related work section, but as the most closely related and recent Gaussian methods, they are not included in the experimental comparisons. Methodologically, I feel that the Gaussian binding approach in this article is very similar to that of GaMeS, yet the authors do not discuss this point. The baselines the authors compare are outdated and are insufficient to demonstrate the superiority of their method.\\n2. The range of data types this article's method can be applied to is not diverse enough. From the authors' experiments, it currently only supports the editing of small objects and relies heavily on the topology mesh obtained from mesh reconstruction algorithms (e.g., NeuS). If the object becomes more complex or includes a complex background, this approach is likely to produce a degraded-quality mesh, making it impossible to proceed with subsequent binding operations.\", \"questions\": \"1. 
There are some citation errors, such as the reference to Mesh-GS (Waczynska et al., 2024), which actually pertains to the GaMeS paper.\\n2. Have you tried testing your method on real-world data with backgrounds, such as the LLFF dataset? How effective is it in such cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a method for 3D Gaussian manipulation using a mesh as a proxy. By defining a local coordinate system for each triangle, the paper associates Gaussians with each triangle in a self-adaptive manner. The paper is clearly illustrated and thoroughly demonstrated.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is well written, and the analysis is comprehensive.\", \"The idea of correlating the scale of 3D Gaussians with the shape of the triangles to better handle large-scale deformations is reasonable.\", \"The experimental results appear to be valid.\"], \"weaknesses\": [\"3D Gaussian Splatting achieves high-quality rendering results primarily due to its split/clone mechanism, which adaptively adjusts the number of points in the scene. However, this paper limits the number of Gaussians in each triangle face, which may restrict its fitting capability. Nevertheless, the rendering metrics in Table 1 appear to be very high, with some even exceeding those of the original 3DGS; this raises questions.\", \"The main innovation of this paper lies in the introduction of $e$ in Equation 7 to better handle large-scale deformations. However, this is not evident in the ablation study. In fact, both the 3DGS on Mesh and Mesh + Offset experiments seem not to address the rotation of Gaussians, which is unreasonable.\", \"The current experimental examples are focused on hard surfaces. 
However, the greater advantage of 3DGS compared to meshes lies in rendering scenes without well-defined surfaces. How does this method perform on fuzzy geometry (e.g., the data from \\\"Adaptive Shells for Efficient Neural Radiance Field Rendering\\\")?\"], \"questions\": \"The training of 3DGS in the paper is conducted entirely in static scenes, which fails to effectively learn the correspondence that Gaussian and mesh should maintain during motion. If a 3D Gaussian is trained separately and then matched to the mesh surface (transforming coordinates from world space to the local space of each triangle), can good manipulation still be achieved?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0MhlzybvAp
Balanced Learning for Domain Adaptive Semantic Segmentation
[ "Wangkai Li", "Rui Sun", "Bohao Liao", "Zhaoyang Li", "Tianzhu Zhang" ]
Unsupervised domain adaptation (UDA) for semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain, improving model performance on the target dataset without additional annotations. Despite the effectiveness of self-training techniques in UDA, they struggle to learn each class in a balanced manner due to inherent class imbalance and distribution shift in both data and label space between domains. To address this issue, we propose Balanced Learning for Domain Adaptation (BLDA), a novel approach to directly assess and alleviate class bias without requiring prior knowledge about the distribution shift between domains. First, we identify over-predicted and under-predicted classes by analyzing the distribution of predicted logits. Subsequently, we introduce a post-hoc approach to align the positive and negative logits distributions across different classes using anchor distributions and cumulative density functions. To further consider the network's need to generate unbiased pseudo-labels during self-training, we couple Gaussian mixture models to estimate logits distributions online and incorporate logits correction terms into the loss function. Moreover, we leverage the resulting cumulative density as domain-shared structural knowledge to connect the source and target domains. Extensive experiments on two standard UDA semantic segmentation benchmarks demonstrate that BLDA consistently improves performance, especially for under-predicted classes, when integrated into existing methods. Our work highlights the importance of balanced learning in UDA and effectively mitigates class bias in domain adaptive semantic segmentation.
[ "Semantic segmentation" ]
Reject
https://openreview.net/pdf?id=0MhlzybvAp
https://openreview.net/forum?id=0MhlzybvAp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zQjx6rhT70", "Xg9TxovbOo", "VWUbty6Nfj", "KwgoXaUiHY", "CjQxc7S33N", "AhoA2T0WTP", "8FW56kyOHg" ], "note_type": [ "official_review", "official_comment", "official_review", "official_review", "meta_review", "official_review", "decision" ], "note_created": [ 1730897457464, 1732586193669, 1730948127182, 1730726476462, 1733917709826, 1729956278476, 1737524000314 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9695/Reviewer_zNzB" ], [ "ICLR.cc/2025/Conference/Submission9695/Reviewer_zNzB" ], [ "ICLR.cc/2025/Conference/Submission9695/Reviewer_9gg7" ], [ "ICLR.cc/2025/Conference/Submission9695/Reviewer_Z47K" ], [ "ICLR.cc/2025/Conference/Submission9695/Area_Chair_z4DV" ], [ "ICLR.cc/2025/Conference/Submission9695/Reviewer_K43a" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the challenge of class imbalance in unsupervised domain adaptation (UDA) for semantic segmentation, where labeled source data is used to improve the model\\u2019s performance on an unlabeled target dataset. The authors propose a Balanced Learning for Domain Adaptation (BLDA) technique that aligns class predictions by analyzing and adjusting predicted logit distributions, even without prior knowledge of distribution shifts. BLDA enhances UDA model performance by mitigating class bias, particularly for under-represented classes, leading to more accurate segmentation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The motivation is clear, with a thorough statistical analysis of the class bias issue in unsupervised domain adaptation (UDA) for semantic segmentation (Figures 1 and 2).\", \"The paper is generally well-written, well-structured, and easy to follow.\", \"The proposed method comprises four modules. 
Although each module is simple and widely used in the machine learning field (e.g., GMM and alignment with anchor distributions), these techniques are effective in addressing issues found in this task.\", \"The experiments are comprehensive, covering three transfer tasks for segmentation, an additional image classification task (included in the supplementary materials), and extensive qualitative analyses.\"], \"weaknesses\": \"1. The proposed method is computationally heavy, as it includes an additional regression head with extra training objectives and requires GMM updates via EM algorithms. Consequently, this approach may incur significantly more computation time and memory usage than baseline methods.\\n\\n2. In Tables 1, 2, and 4, all existing methods equipped with BLDA are outdated. It remains questionable whether current SOTA methods (in 2023 and 2024) are sufficient to address prediction bias issues.\", \"questions\": \"1. For weakness 1, could you conduct a theoretical complexity analysis comparing the proposed BLDA with the baseline? Additionally, please report and analyze the actual inference time, training time, and memory usage, along with a comparison to baseline methods (without adding BLDA).\\n\\n2. For weakness 2, could you integrate BLDA into recent UDA segmentation methods [A], [B], [C], and [D]?\\n\\n3. The mentioned works are highly relevant but lack citations in this paper. 
Could you update Section 2.1 (Related Work) to include all necessary references?\\n\\n[A] Focus on Your Target: A Dual Teacher-Student Framework for Domain-adaptive Semantic Segmentation\\n[B] CDAC: Cross-domain Attention Consistency in Transformer for Domain Adaptive Semantic Segmentation\\n[C] Diffusion-based Image Translation with Label Guidance for Domain Adaptive Semantic Segmentation\\n[D] Learning Pseudo-Relations for Cross-domain Semantic Segmentation\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response. Most of my concerns have been addressed, and I will therefore maintain my current positive rating.\"}", "{\"summary\": \"This paper proposes a novel approach called BLDA to address class bias in domain adaptation for semantic segmentation tasks. It first evaluates prediction bias across different classes by analyzing the network's logits distribution. Then, a post-hoc method is designed to adjust logits distributions after training. Since the logits change during training, a real-time logits adjustment module is also proposed, using GMMs to estimate logits distribution parameters online. The author then introduces cumulative density estimation as shared structural knowledge to connect the source and target domains. An additional regression head in the network predicts the cumulative distribution value of samples, which represents class discriminative capability, further enhancing adaptation performance on semantic segmentation tasks. The results in the experiments show its effectiveness as a module addition to selected existing DA for segmentation baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper provides a new way to measure the class distribution changes in semantic segmentation by the logits distribution.\\n2. 
The proposed module could easily be applied to existing UDA for semantic segmentation methods, potentially having broad use in this area.\\n3. The proposed module is generally effective on most of the classes in the two benchmark tasks.\\n4. The visual aids are good, providing an intuition for the motivation and demonstrating the effectiveness of the proposed module.\", \"weaknesses\": \"1. The proposed method relies on the logits distribution. However, this distribution can be affected by data quality and model architecture, which can affect the accuracy of bias assessment.\\n2. As a DA for segmentation method, a very severe issue is its efficiency. The adaptation process already costs a lot of time and computational resources, and the proposed method seems to exacerbate this issue with multiple GMMs. An efficiency study including wall-clock time or other efficiency measurements would be good to discuss the trade-offs between class-balanced performance and the actual cost.\\n3. If the anchor distribution is far away from the true distribution of the target domain, logits alignment may be suboptimal, meaning that if the domain gap is large, this part may not work.\\n4. As a module proposed rather than a whole algorithm, its effectiveness is expected to be confirmed on a considerably larger number of baseline methods; however, only a few of them are studied, and they are compared only for Transformer-based methods. I would recommend evaluating on more baselines such as [1][2][3] and backbones (such as DeepLab v2 and DeepLab v3+, for methods such as ProDA) to confirm its effectiveness, especially those with even more severe class-imbalance issues.\\n5. There exists a large number of methods or loss functions targeting the class-imbalance issue (for semantic segmentation or not); some need to be discussed in the related works and some need experiments for comparison, but only a few of them are listed and discussed. \\n6. 
Since the classes have been categorised as over/under predicted, grouping them in the experiments and studying them separately would be better for understanding the module's effectiveness on classes with different characteristics.\\n\\nI will score up or down based on the author's reply.\\n\\n[1]. Domain adaptive semantic segmentation by optimal transport\\n\\n[2]. DiGA: Distil to Generalize and then Adapt for Domain Adaptive Semantic Segmentation\\n\\n[3]. Prototypical contrast adaptation for domain adaptive semantic segmentation\", \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses the unsupervised domain adaptation problem in semantic segmentation tasks. The method first identifies unbalanced classes by analyzing the predicted logits. Then, it aligns the distributions using a preset anchor distribution. Finally, it also adopts a Gaussian mixture model to estimate logits online to generate unbiased pseudo-labels for self-training. Experiments are conducted on the classic GTAv/SYNTHIA to Cityscapes benchmark for evaluation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow. The figures clearly show the distribution trends to help understand the core idea.\\n2. There is extensive use of formal mathematical notation to describe the proposed method precisely.\\n3. The experiments on the GTAv/SYNTHIA/Cityscapes benchmark show clear improvements over baseline methods.\", \"weaknesses\": \"1. The novelty is limited. The data distribution problem is not newly recognized, and the proposed method adopting anchor distributions for alignment and GMM for unbiased generation is also explored by previous methods. For example, the following papers [a-d] also adopt anchors and/or GMM methods for cross-domain alignment. 
Please consider providing more discussion with these related works.\\n2. The method is only verified on a relatively small-scale benchmark. The compared works are from two years ago, which cannot prove this work's value to today's more advanced semantic segmentation approaches. Please consider providing more analysis with other datasets to prove the generalization ability of the method. Optional datasets such as Vistas, IDDA, BDD100k, and VIPER.\\n\\n[a] Multi-Anchor Active Domain Adaptation for Semantic Segmentation\\n\\n[b] Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation\\n\\n[c] ProtoGMM: Multi-prototype Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation\\n\\n[d] Uncertainty-aware Pseudo Label Refinery for Domain Adaptive Semantic Segmentation\", \"questions\": \"Please refer to the weaknesses for details. Due to the concerns of the novelty and potential impact, the reviewer is inclined to rate a borderline reject.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The work proposes a novel approach namely BLDA to tackle class bias in unsupervised domain adaptation for semantic segmentation. The method analyzes logits distributions to assess class imbalance, employs Gaussian Mixture Models (GMMs) to adjust logits online, and utilizes cumulative density estimation to align source and target domains. Extensive experiments demonstrate its effectiveness as a plug-and-play module, with improvements in segmentation performance across diverse datasets and baselines. Strengths of the paper include its clear motivation and comprehensive experimentation. However, the novelty of the proposed approach is somewhat limited due to similarities with prior works that use GMMs or anchor-based approaches. 
Reviewers also raised concerns about its computational inefficiency and noted a lack of validation on larger or more diverse benchmarks. The authors have proactively addressed most concerns on experiments, yet the core contributions remain too marginal to reach the publication bar of ICLR.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, reviewers raised concerns about novelty, computational cost, and generalizability. Specific issues included the similarity to prior GMM-based methods, insufficient evaluation on recent baselines, and limited benchmarks. The authors responded with detailed explanations, providing theoretical complexity analysis, efficiency improvements, and validation on additional datasets such as VIPER and BDD. These efforts demonstrated the method\\u2019s practical applicability and clarified its unique contributions to addressing class bias in UDA.\\n\\nHowever, reviewers like 9gg7 and Z47K remained unconvinced, noting the incremental novelty and suboptimal choice of benchmarks. Despite thorough rebuttals and additional experiments, the reviewers kept their initial ratings due to their doubts about the paper's broader impact and relevance to current SOTA methods. These considerations ultimately led to the decision to reject, while acknowledging the potential of the work with further development and validation.\"}
Extensive experiments on standard UDA semantic segmentation benchmarks demonstrate significant performance improvements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Class imbalance is an important issue in DASS, and this paper provides a novel method to tackle this problem by aligning the logits distributions of all classes with anchor distributions to achieve balanced prediction.\\n\\n2. Extensive experiments have demonstrated the effectiveness of the proposed method.\", \"weaknesses\": \"1. The paper claims a key contribution in proposing a post-hoc class balancing technique to adjust the network's predictions by establishing two anchor distributions, $P_p$ for positive predictions and $P_n$ for negative predictions. However, the paper lacks sufficient explanation regarding the selection criteria for these anchor distributions, which raises questions about the method's validity and soundness.\\n\\n2. The current approach in this paper aligns the positive and negative distributions to anchor distributions as part of the post-hoc class balancing strategy. However, based on my understanding, this alignment may not effectively address label noise\\u2014a crucial aspect of self-training where pseudo label denoising is often central to performance improvement. Instead, recent studies [1,2] have demonstrated the utility of negative pseudo labeling, showing that leveraging negative information more directly can enhance model robustness and reduce noise. Clarification on the rationale for this alignment-based approach, especially in comparison to existing negative pseudo-labeling methods, would help to justify the method\\u2019s efficacy and theoretical basis in the context of label noise mitigation.\\n\\n[1]. Domain Adaptive Semantic Segmentation without Source Data\\n\\n[2]. A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation\", \"questions\": \"## Some questions in Figure 3:\\n\\n1. 
Figure 3 presents the logit distributions for positive and negative samples; however, the lack of labeled x- and y-axes in the figure makes it challenging to interpret these distributions effectively. \\n2. There is no clear explanation of the direction of reweighting and resampling applied to the logit distributions. This omission makes it difficult to understand the intended insights from Figure 3, as well as the overall method\\u2019s mechanism and impact on balancing. \\n\\n3. There are a few grammatical errors, such as the \\\"Discusiion\\\" in L307.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
0MVWOHwHDb
Retrieval-Augmented Language Model for Knowledge-aware Protein Encoding
[ "Jiasheng Zhang", "Delvin Ce Zhang", "Shuang Liang", "Zhengpin Li", "Rex Ying", "Jie Shao" ]
Protein language models often struggle to capture the biological functions encoded within protein sequences due to their lack of factual knowledge (e.g., gene descriptions of proteins). Existing solutions leverage protein knowledge graphs (PKGs), using knowledge as auxiliary encoding objectives. However, none of them has explored the direct injection of correlated knowledge into protein language models or task-oriented knowledge integration during fine-tuning, making them suffer from insufficient knowledge exploitation and catastrophic forgetting of pre-trained knowledge. The root cause is that they fail to align PKGs with downstream tasks, forcing their knowledge modeling to adapt to the knowledge-isolated nature of these tasks. To tackle these limitations, we propose a novel knowledge retriever that can accurately predict gene descriptions for new proteins in downstream tasks and thus align them with PKGs. On this basis, we propose the Knowledge-aware retrieval-augmented protein language model (Kara), achieving the first unified and direct integration of PKGs and protein language models. Using the knowledge retriever, both the pre-training and fine-tuning stages can incorporate knowledge through a unified modeling process, where contextualized virtual tokens enable token-level integration of high-order knowledge. Moreover, structure-based regularization is introduced to inject function similarity into protein representations and unify the pre-training and fine-tuning optimization objectives. Experimental results show that Kara consistently outperforms existing knowledge-enhanced models in 6 representative tasks, achieving 5.1% improvements on average.
[ "Knowledge Graphs; Protein Science; Representation Learning" ]
Reject
https://openreview.net/pdf?id=0MVWOHwHDb
https://openreview.net/forum?id=0MVWOHwHDb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xiSS8eL3Mw", "wtzlOFomPL", "wbKwzArTU8", "uoZtqPWkoF", "tDsnucNWPn", "owAbvGJVh0", "og5GuwzkPK", "oYpJ3rtvja", "ljsW9QB9ke", "edLOvYdEPs", "ZOCaBpjos7", "Om2A3cc3qU", "NGn1Bg5OE2", "M0d7YttC21", "LcjQ7srJh4", "KOfhA8ltPR", "I0nk9xLcZA", "H18C9xhWgw", "GpFS0ltQjw", "ASFqK6876e", "8QuHVwcxhK", "7Net28JSdf" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730551041611, 1731868287327, 1730648763088, 1731870957148, 1730723180843, 1732483242444, 1731892306648, 1731871464647, 1730388550716, 1732483319595, 1733080992793, 1732506919570, 1734955503890, 1732715446679, 1737523848110, 1732483406935, 1732486541773, 1732483450145, 1731865025679, 1731867195477, 1731869166732, 1731869850133 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_XfFc" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_iHdH" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_4Ji4" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_JTXJ" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Area_Chair_PeZy" ], [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_XfFc" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Reviewer_iHdH" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ], [ "ICLR.cc/2025/Conference/Submission7571/Authors" ] ], "structured_content_str": [ "{\"summary\": \"How to effectively transfer knowledge from knowledge graphs to large language models is a challenging task. In this paper, the authors are the first to propose a novel knowledge retriever, named Kara, that directly injects correlated knowledge into protein language models and aligns the protein knowledge graph with downstream tasks. Specifically, contextualized virtual tokens are designed to enable the direct injection of knowledge and high-order structure information into protein representations. Extensive experimental results, ranging from amino acid contact prediction to semantic similarity inference, demonstrate the superior performance of the proposed Kara.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In general, the paper is clearly expressed and organized. The authors' innovation of directly injecting the protein knowledge graph into a large language model to explore knowledge-aware protein representation learning will have some implications for the biomedical field. 
In addition, the experiments in the discussion section demonstrate that the virtual tokens and structure-based regularization are good at capturing high-order information of the protein knowledge graph from a novel perspective.\", \"weaknesses\": \"The Introduction needs to provide more background information, such as the specific role of Knowledge Graphs (KGs) in this context, the benefits they offer, and the rationale behind exploring KG-based methods.\", \"questions\": \"1. How is the ProteinKG25 knowledge graph selected? There are many other well-known protein-related multi-omics knowledge graphs, such as PharmKG (Briefings in bioinformatics, 2021) and CKG (Nature biotechnology, 2022). Do the types and numbers of entities and relationships affect model performance?\\n\\n2. The knowledge graph contains only positive samples for interaction-based tasks. Did the authors incorporate negative sampling during training? If so, please provide additional details on how this was implemented.\\n\\n3. ProteinKG25 is used as the KG, but the model is evaluated on six representative tasks; it is unclear how the entities of these datasets are linked to the knowledge graph.\\n\\n4. Where is Table 7? Am I missing it?\\n\\n5. From many experimental results (e.g., Table 3 and Table 6), we can see that KeAP has achieved comparable performance to Kara. Please describe the difference between the two methods in detail; I am also curious about the complexity and number of parameters of the two methods.\\n\\n6. The experimental design is good; however, there is one limitation that precludes the reader from understanding how generalizable the method is. Only one protein embedding method (ProtBert) is tested for the pre-trained embeddings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to question (3) and question (4).\", \"comment\": \"``` Q3. 
ProteinKG25 is used as the KG, but the model is evaluated on six representative tasks; it is unclear how the entities of these datasets are linked to the knowledge graph.```\\n\\nThanks for your kind comment. __First, samples (i.e., proteins) in downstream tasks are entirely unseen in the pre-training KG.__ As we have mentioned in Section 2, each piece of knowledge in ProteinKG25 is represented as a triple (protein, relation, gene ontology annotation), where each protein is associated with its amino acid sequence. As we have mentioned in Appendix B, __each sample in downstream datasets is an amino acid sequence of a protein associated with the task-specific label.__ Moreover, as we have mentioned in lines 710-712 on Page 14, __amino acid sequences that appear in downstream tasks are removed from the knowledge graph, making them entirely unseen during pre-training. This inductive setting ensures that our downstream task evaluations can faithfully reflect the model\\u2019s generalization ability to new proteins.__\\n\\nSecond, as we have described in Section 3.2.1, __we introduce a novel knowledge retriever that predicts potential gene annotations and their relationships for new proteins, effectively linking each new protein to the knowledge graph__ (e.g., the knowledge retriever takes a new protein p_u as input and outputs its potential knowledge (p_u, r_k, go_k), where r_k and go_k already exist in the KG). Since proteins from downstream tasks do not exist in the knowledge graph, there is no relevant knowledge available for use during task inference (a limitation of KeAP and Ontoprotein). __The proposed knowledge retriever allows Kara to extract relevant knowledge and similar proteins from the knowledge graph as contextual information for inference on new proteins, overcoming this limitation.__\\n\\n&nbsp;\\n\\n``` Q4. Where is Table 7? Am I missing it?```\\n\\nThanks for your kind comment. 
__Table 7 is at the top of page 9, lines 433-436.__\"}", "{\"summary\": \"The paper introduces Kara, which uses information from protein knowledge graphs to improve protein language models. Kara directly injects relevant knowledge during both pre-training and fine-tuning phases, utilizing a knowledge retriever that predicts gene descriptions for new proteins. Kara introduces several key components: Contextualized Virtual Tokens that fuse knowledge and structural information into protein representations; knowledge injection at both post-training and fine-tuning stages; and retrieval of relevant proteins and graph information with a dense retriever.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The performance improves on most tasks (following the same experiment tasks and settings as Ontoprotein) compared to Ontoprotein and KeAP.\", \"The encoding style of Kara combines strengths of Ontoprotein and KeAP: Ontoprotein uses contrastive pretraining to first obtain a structure-intensive graph embedding and then injects it into the language model, while KeAP directly encodes related knowledge in tuples with a language encoder. Differently, Kara encodes 1-hop GO entities as knowledge and 2-hop entities as structure to provide more detailed graph knowledge for the protein language model.\", \"The knowledge retriever maps new protein sequences to GO entities, which could make it possible to generalize to proteins not directly covered by the knowledge graph.\"], \"weaknesses\": \"1. My major concern of this work is its technical contributions, which closely follow OntoProtein and KeAP. The main improvement of Kara compared to Ontoprotein and KeAP is that it encodes both structural information (relations in GO) and knowledge (knowledge stored in each triple) within the contextualized virtual tokens. 
Ontoprotein uses the same pipeline to encode the protein knowledge graph and inject embeddings into the language model, so the technical contributions are minor.\\n\\n2. The structural regularization (Eq. 6) obtained from two-hop entities might be weak or misleading. ProteinKG25 is a sparse knowledge graph, and its entities include not only proteins but also biological processes and molecular functions. What is the percentage of proteins that have 2-hop protein neighbors, and are the neighbors all functionally similar? Neighbors may not be similar proteins but could be proteins that interact with each other. Their function may not be similar.\", \"questions\": \"1. Protein downstream tasks often require different kinds of knowledge, e.g. PPI requires knowledge about the functions and relations of the two proteins, contact prediction requires evolutionary and structural knowledge. I wonder if the authors could further provide insights on how knowledge & structural information differentiate across tasks. For example, why introducing more graph structural knowledge could improve the performance on contact prediction.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to weakness (1).\", \"comment\": [\"``` W1. My major concern of this work is its technical contributions, which closely follow OntoProtein and KeAP. The main improvement of Kara compared to Ontoprotein and KeAP is that it encodes both structural information (relations in GO) and knowledge (knowledge stored in each triple) within the contextualized virtual tokens. Ontoprotein uses the same pipeline to encode the protein knowledge graph and inject embeddings into the language model, so the technical contributions are minor.```\\n\\nThank you for your kind comment. 
We would like to clarify that __Kara has a completely distinct architecture compared to KeAP and OntoProtein.__ Kara\\u2019s novel pipeline contains three main components: contextualized virtual tokens, structure-based regularization, and a knowledge retriever (none of which are present in KeAP and OntoProtein). __These components enable Kara to use precise knowledge information, integrate protein function similarities, and adapt to knowledge updates in the KG (where KeAP and OntoProtein face limitations).__ We detail their differences as follows:\", \"From how to integrate knowledge into language models.\", \"__KeAP and OntoProtein implicitly embed knowledge within the parameters of the language model.__ Specifically, during pre-training, they first use the protein language model to encode a protein's amino acid sequence into an embedding. Then, KeAP uses another transformer-based decoder to receive knowledge and encoded embeddings to predict masked amino acid tokens. OntoProtein uses a TransE objective to train the embedding of each protein to be closer to its related knowledge in the embedding space. They propose that this knowledge-guided masked language modeling approach helps retain knowledge within the model parameters. However, as we have discussed in lines 50\\u201367 on Page 1, __language models often struggle to retain knowledge precisely. Additionally, they process each piece of knowledge independently, failing to integrate the complete knowledge context of proteins.__\", \"__Kara directly uses the knowledge of each protein as a part of the language model's input.__ As described in Section 3.1.1, Kara summarizes 1-hop neighbors of a protein (gene descriptions) as \\\"knowledge virtual tokens\\\" and 2-hop neighbors (functionally similar proteins) as \\\"structure virtual tokens.\\\" These virtual tokens are then concatenated with the amino acid sequence to form the model input. 
__This approach can not only input precise knowledge information into the language model, but also provide a broader knowledge context by leveraging neighboring information.__\", \"From how to pre-train the language model.\", \"__KeAP employs a decoder to receive knowledge and encoded embeddings to predict masked amino acid tokens. OntoProtein uses a TransE objective to train the embedding of each protein to be closer to its related knowledge in the embedding space.__ However, the transformer-based decoder introduces significant training complexity and a large number of parameters. Additionally, their pre-training overlooks the protein relevance provided by the KG structure, leading to insufficient knowledge exploitation.\", \"__Kara predicts masked amino acid tokens directly using the protein language model with the prompt of virtual tokens.__ This eliminates the need for a decoder, reducing both training complexity and parameter size. Furthermore, as discussed in Section 3.1.2, __Kara is also trained to embed functionally similar proteins closer together in the embedding space, integrating high-order graph structural relevance (i.e., functional similarity) into protein representations.__\", \"From how to encode new proteins.\", \"Since KeAP and OntoProtein assume that knowledge has been embedded within the parameters of the language model, __they directly input the amino acid sequence of a new protein into the pre-trained language model to get its embedding, which suffers from imprecise knowledge information and fails to adapt to knowledge updates.__\", \"__Kara proposes a novel knowledge retriever to retrieve related knowledge for each new protein,__ and then summarizes the retrieved knowledge as virtual tokens to input into the language model, which __can integrate precise knowledge information into the protein language model.__ Moreover, any updates to the related knowledge of a protein can be perceived by the knowledge retriever during retrieval, and then 
integrated during encoding via the virtual tokens, __ensuring that Kara can always use the most current knowledge for encoding.__\"]}", "{\"summary\": \"Instead of implicitly modeling knowledge information, the paper proposes the knowledge-aware retrieval-augmented protein language model (Kara), which enables consistent knowledge-augmented modeling when applied to downstream protein-related tasks. During the pretraining stage, the authors try to model the structural information in the protein knowledge graph, such as neighboring and high-order connectivity. In the fine-tuning stage, a knowledge retriever is used to bridge the optimization gap between pretraining and fine-tuning, allowing the model to seamlessly adapt to knowledge updates.\\nThe authors conduct extensive experiments and demonstrate that this unified knowledge modeling process consistently outperforms existing knowledge-enhanced models across six protein-related tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The method effectively carries the external knowledge injected during pretraining into downstream tasks, significantly mitigating catastrophic forgetting. Additionally, the knowledge retrieval process is not overly complex.\\n2. The proposed relation-GO combinations further enhance the retriever\\u2019s ability to recall informative knowledge.\\n3. The authors demonstrate the method\\u2019s effectiveness across multiple tasks and conduct thorough ablation studies, such as the effect of removing the neighboring information during inference.\\n4. The paper is well-written and clear.\", \"weaknesses\": \"1. If the protein belongs to an under-studied or new protein family, does this retrieval method have certain limitations, especially when these proteins have very low sequence identity to known (trained) proteins? 
It would be better to include experiments on under-studied proteins to demonstrate this, possibly in a simulated way by splitting clusters of low-identity proteins into training and validation sets.\\n2. Further, does the method have the potential to uncover patterns of new proteins and their associations with existing ones?\", \"questions\": \"I have listed my questions in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer 4Ji4:\\n\\nWe would like to know if our response has addressed your concerns and questions. If you have any further concerns or suggestions for the paper or our rebuttal, please let us know. We would be happy to engage in further discussion and manuscript improvement. Thank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"comment\": \"``` W1. If the protein belongs to an under-studied or new protein family, does this retrieval method have certain limitations, especially when these proteins have very low sequence identity to known (trained) proteins? It would be better to include experiments on under-studied proteins to demonstrate this, possibly in a simulated way by splitting clusters of low-identity proteins into training and validation sets.```\\n\\nThank you for your thoughtful comment. To evaluate the generalization ability of the knowledge retriever on under-studied proteins, we employ a new data-splitting strategy. First, we randomly divide the triples (i.e., (protein, relation, go)) into training and testing sets in an 8:2 ratio. Next, we remove any triple (p_i, r_i, go_i) from the training set if go_i appears in any test triples. This ensures that the knowledge (e.g., gene descriptions) associated with test proteins is entirely absent from the training set, and thus unlearnable during training. 
__This splitting method simulates under-studied proteins whose functions and gene descriptions have not been observed before.__ The results, presented in the following table, demonstrate that __our knowledge retriever can generalize to these proteins.__ Additionally, fine-tuning the last three layers of the PubMedBert encoder during training further improves its performance, __highlighting its potential to generalize to unseen gene descriptions through domain-specific fine-tuning.__\\n\\n| Models | Hits@1 | Hits@3 | Hits@10 |\\n| :-----:| :-----:| :----: | :----: |\\n| without PubMedBert fine-tuning| 0.430 | 0.608 | 0.796 |\\n| with PubMedBert fine-tuning| 0.495 | 0.683 | 0.859 |\\n\\n&nbsp;\\n\\n``` W2. Further, does the method have the potential to uncover patterns of new proteins and their associations with existing ones? ```\\n\\nThank you for your thoughtful comment. Our knowledge retriever can predict potential knowledge triples for new proteins (i.e., it takes a new protein p_u as input and outputs its potential knowledge (p_u, r_k, go_k), where r_k and go_k already exist in the KG). This enables the new protein to be linked to the KG, __therefore revealing potential functions for new proteins. Additionally, paths between the new protein and existing proteins can illustrate their relevance.__ For example, knowledge (p_u, r_k, go_k) and (p_j, r_j, go_k) can form a path (r_k, go_k, r_j), illustrating the roles of the two proteins in a shared biological activity.\"}", "{\"title\": \"Response to weakness (2) and question (1).\", \"comment\": \"``` W2. The structural regularization (Eq. 6) obtained from two-hop entities might be weak or misleading. ProteinKG25 is a sparse knowledge graph and its entities include not only proteins but also biological processes and molecular functions. What is the percentage of proteins that have 2-hop protein neighbors, and are the neighbors all functionally similar? 
Neighbors may not be similar proteins but could be proteins that interact with each other. Their function may not be similar. ```\\n\\nThank you for your kind comment. First, we would like to clarify that __Kara does not simply consider 2-hop connected proteins as functionally similar.__ As stated on lines 158-160 on Page 3 and lines 228-230 on Page 5, two proteins are considered functionally similar if they are connected to the same GO entity through the same relation. \\n\\nSecond, __the structure of ProteinKG25 ensures that proteins selected by the above strategy are functionally similar.__ As we have introduced in Section 2, each piece of knowledge in ProteinKG25 is represented as a triple (protein, relation, gene ontology annotation), where __each gene ontology annotation is a statement about the function of a particular gene or gene product__, e.g., the gene product \\u201ccytochrome c\\u201d can be described by the molecular function oxidoreductase activity, and the relation describes the relationship between a protein and a gene function, such as \\u201cenables\\u201d and \\u201cinvolved in\\u201d. Therefore, each piece of knowledge in ProteinKG25 describes the role of a protein in a biological activity. __This means that two proteins connecting with a gene ontology through the same relation will serve the same role in biological activities, and thus have similar functions.__\\n\\nThird, our statistics on ProteinKG25 show that __99% of proteins in ProteinKG25 have at least one functionally similar protein (i.e., two-hop connected through the same relation), which is sufficient for training Kara.__\\n\\n``` Q1. Protein downstream tasks often require different kinds of knowledge, e.g. PPI requires knowledge about the functions and relations of the two proteins, and contact prediction requires evolutionary and structural knowledge. 
I wonder if the authors could further provide insights on how knowledge & structural information differentiate across tasks. For example, why introducing more graph structural knowledge could improve the performance on contact prediction. ```\\n\\nThank you for your kind comment. First, we would like to clarify that __the purpose of incorporating the KG is not to bring specific knowledge for each task. Instead, it is to infuse general biological knowledge into protein language models, making protein representations more discriminative in the embedding space, and thus achieving performance improvements on various downstream tasks.__ Integrating the KG offers two benefits for enhancing protein embeddings:\\n\\n- __Multi-modal information integration:__ Traditional protein language models rely solely on amino acid sequences, providing only the information of \\u201cwhat is this protein composed of\\u201d. __In contrast, the textual knowledge in the KG provides high-level insights into a protein's role in biological activities, which amino acid sequences cannot reveal.__ Incorporating this KG therefore enables representing proteins from different perspectives, resulting in more comprehensive and higher-quality protein embeddings.\\n\\n- __Protein relevance indication:__ The KG structure highlights functional similarities among proteins (i.e., if two proteins connect to the same gene ontology annotation through identical relations, this means they have the same biological roles, and thus share similar functions). __An ideal protein language model would embed functionally similar proteins closer together. 
However, traditional protein language models only use amino acid sequences, and miss these functional similarities, leading to less precise embeddings.__\"}", "{\"summary\": \"The paper introduces the Kara model, which integrates protein knowledge graphs (PKG) directly into protein language models (PLM) to enhance understanding of biological functions encoded in protein sequences.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"It uses a novel knowledge retriever to predict gene descriptions for new proteins during both pre-training and fine-tuning stages, which helps in aligning with PKGs and improves knowledge retention.\\n\\nThese tokens enable token-level integration of knowledge and structural information into protein representations, enhancing the model\\u2019s ability to handle high-order knowledge.\", \"weaknesses\": \"The performance of the model heavily relies on the quality and the extent of the PKGs used, which might limit its application if relevant knowledge graphs are incomplete or outdated.\\n\\nWhile the model shows improvements in task-specific contexts, its ability to generalize across broader protein types or different biological conditions remains uncertain.\", \"questions\": \"see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer iHdH:\\n\\nWe would like to know if our response has addressed your concerns and questions. If you have any further concerns or suggestions for the paper or our rebuttal, please let us know. We would be happy to engage in further discussion and manuscript improvement. 
Thank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear reviewer iHdH:\\n\\nAs we approach the end of the discussion period, please let us know if you have any thoughts regarding our above comment addressing your concerns. We thank you very much for your efforts thus far.\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Thank you for raising the soundness score. We respond to your further concerns as follows.\\n\\n## __Novelty and contributions.__\\nAlthough using knowledge to enhance the protein language model is a very intuitive idea, __how to use KG is a more critical problem for practical usage, since many challenges exist in real-world use cases:__\\n- KGs are consistently updated in the real world, how to avoid the model using outdated knowledge? \\n- Many newly observed proteins are under-studied and thus do not exist in KG, how can we generalize the model to these under-studied proteins? \\n- Usually, we need to fine-tune the model to adapt to various downstream applications. How can we ensure the knowledge learned during pre-train is not to be catastrophically forgotten during fine-tuning?\\n\\nThese challenges are very serious for a practical\\u00a0KG-enhanced protein language model, but they remain overlooked by previous works (OntoProtein and KeAP). __Therefore, the unique insights provided by our work lie in two parts:__\\n- __Points out these critical real-world challenges overlooked by previous works (lines 050-066).__\\n- __Providing several verified technical designs to solve the above challenges (i.e., knowledge retriever, structure-based regularizations, and knowledge and structure virtual tokens).__\\n\\n&nbsp;\\n\\n## __Differences to previous KG-augmented methods.__\\nFirst, some previous works also __use virtual tokens to incorporate knowledge and structural information [1, 2, 3, 4]__. 
They typically assume every encoding objective has existed in KG and the knowledge information can be directly extracted after matching corresponding entities. __However, in the protein-encoding scenario, many under-studied proteins do not exist in KG, making the previous \\u201cmatching and extracting\\u201d strategy not work.__ To tackle this challenge, we propose a novel knowledge retriever to predict gene descriptions for new proteins, which __enables our model to generalize to unseen encoding objectives (where previous work fell short).__\\n\\nSecond, some recent methods also __propose using a retriever to find related entities from KG to enhance LLM generation [5, 6]. However, they are designed for general KGs and cannot handle unique challenges for protein knowledge graphs.__ Specifically, protein KGs contain two types of entities with different modality information, requiring the retrieval process to consider multi-modal information alignment. Additionally, it contains a large amount of different textual gene descriptions, bringing a large candidate space with complex semantics. __Our knowledge retriever is specially designed to solve these challenges with multi-modal matching loss and relation-go combination strategies.__\\n\\nThird, some previous works also __integrate knowledge and structural information within KGs into training objectives [7, 8, 9, 10, 11].__ They are typically designed for document encoding where the entities in KG are words that appeared in documents. They use structural information to assign mask possibilities for different words during masked language modeling or train the model to predict graph neighbors. __However, in the protein-encoding scenario, both the encoding objective and entities in KG are protein sequences, making previous training strategies not work. Moreover, they only predict one-hop neighbors, which overlooked high-order structural relevance in their objective functions. 
However, high-order relevance is important for protein encoding since it indicates the functional similarity between proteins.__\\n\\u00a0\\n\\nIn summary, our model is not an oversimplified pipeline that \\u201cIncorporating multi-hop encoded structural information alongside knowledge information\\u201d. __It contains several unique technical designs (e.g., knowledge retriever and structure-based regularizations) to solve special challenges in protein encoding scenarios, making it different from previous methods.__\\n\\n[1]DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding. 2022\\n\\n[2]ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. 2021\\n\\n[3]Making Large Language Models Perform Better in Knowledge Graph Completion 2024\\n\\n[4]Edgeformers: Graph-empowered transformers for representation learning on textual-edge networks 2023\\n\\n[5]G-retriever: Retrieval-augmented generation for textual graph understanding and question answering 2024\\n\\n[6]GRAG: Graph Retrieval-Augmented Generation 2024\\n\\n[7]Exploiting structured knowledge in text via graph-guided representation learning 2020\\n\\n[8]Knowledge-aware language model pretraining 2020\\n\\n[9]KEPLER: A unified model for knowledge embedding and pre-trained language representation 2021\\n\\n[10]Pre-training language models with deterministic factual knowledge 2022\\n\\n[11]Unifying Large Language Models and Knowledge Graphs: A Roadmap 2024\"}", "{\"metareview\": \"This paper proposes Kara, which makes efforts in integrating protein knowledge graphs (KGs) into protein language models. The proposed Kara model's use of a knowledge retriever to predict gene descriptions for new proteins, especially during pre-training and fine-tuning. This mechanism helps in aligning with PKGs, retaining knowledge, and generalizing to new proteins. 
The main concerns of this paper lie in its machine learning novelty, i.e., Kara closely follows OntoProtein and KeAP. The main improvement seemed minor as it mainly encodes both structural information and knowledge within contextualized virtual tokens. The authors should revise this paper to show why the proposed method is quite distinct from previous work in both intuitive and empirical ways, and explicitly verify why directly using knowledge as part of the input works better.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns of this paper lie in its machine learning novelty, i.e., Kara closely follows OntoProtein and KeAP. The authors argued that Kara has a distinct architecture with components like contextualized virtual tokens, structure-based regularization, and a knowledge retriever, which are absent in KeAP and OntoProtein. They detailed differences in how knowledge is integrated into language models, how models are pre-trained, and how new proteins are encoded. For instance, Kara directly uses knowledge as part of the input, while KeAP and OntoProtein implicitly embed knowledge within model parameters. However, the authors should revise this paper to show why the proposed method is quite distinct from previous work in both intuitive and empirical ways, and explicitly verify why directly using knowledge as part of the input works better.\"}
We would be happy to engage in further discussion and manuscript improvement. Thank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your clarifications.\\n\\nMy concern regarding **Weakness 2** is resolved. The results showing \\\"99% 2-hop proteins\\\" make the method appear more reasonable. I have therefore updated the soundness score accordingly.\\n\\nHowever, I remain concerned about the **novelty and contributions** of this paper\\u2014particularly, the **new insights** it provides into protein understanding, which is especially crucial in the context of machine learning for proteins, e.g. OntoProtein illustrates how external knowledge base could be used, ESM illustrates the importance of pretraining, ProtST emphasizes multimodal encoding. While improving model architecture and achieving better results are valid contributions, the insights from this work do not seem to differ from those presented in KeAP and OntoProtein. Consequently, I believe this paper does not meet the acceptance threshold, and I would like to maintain my initial ratings.\\n\\nAdditionally, I have a question: has a similar method been previously applied to KG-augmented language models in other application domains? Incorporating multi-hop encoded structural information alongside knowledge information seems like a relatively straightforward approach to encoding graph knowledge. As I am not familiar with this line of work, I would appreciate hearing any insights from the authors or other reviewers.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer JTXJ:\\n\\nWe would like to know if our response has addressed your concerns and questions. If you have any further concerns or suggestions for the paper or our rebuttal, please let us know. We would be happy to engage in further discussion and manuscript improvement. 
Thank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"comment\": \"```W1. The performance of the model heavily relies on the quality and the extent of the PKGs used, which might limit its application if relevant knowledge graphs are incomplete or outdated.```\\n\\nThank you for your kind comment. The following Table shows the performance of Kara using partial ProteinKG25, simulating different levels of KG incompleteness (i.e., randomly selecting 70% and 50% of triples). We can see that __Kara consistently outperforms the SOTA knowledge-enhanced models KeAP and OntoProtein (trained with full ProteinKG25) with different KG incompleteness, highlighting its robustness.__ Such robustness comes from not only the neighbor sampling strategy during pre-training, which simulates the noise of incompleteness and enforces the encoder to be robust to such noise (lines 785-786 on Page 15), but also the introduction of structure virtual tokens that enrich the sparse knowledge context of proteins by integrating the information of their functional-similar proteins (Section 3.1.1 on Page 3).\\n\\n| Models | Concate (6 \\u2264 seq \\u2264 12) | Homology | Stability | Affinity (lower is better) |\\n| :-----:| :-----:| :----: | :----: | :----: |\\n| OntoProtein (full KG)| 0.460 | 0.240 | 0.750 | 0.590 |\\n| KeAP (full KG)| 0.510 | 0.290 | 0.800 | 0.520 |\\n| Kara (50% KG)| 0.540| 0.316 | 0.823 | 0.511 |\\n| Kara (70% KG)| 0.546 | 0.322 | 0.828 | 0.503 |\\n\\n\\nAdditionally, we would like to emphasize that __a key advantage of Kara is its ability to seamlessly adapt to knowledge updates.__ Knowledge graphs in the real world are inevitably incomplete or outdated, making KGs frequently updated (e.g., add new knowledge or remove outdated knowledge). 
As noted in lines 50-54 on Page 1, KeAP and Ontoprotein utilize pre-training to embed knowledge graph information within the language model parameters, and then __inference relies on this preserved static knowledge, making them unable to use knowledge updated after the pre-training stage.__ Instead, as noted in lines 254-259 on Page 5 and lines 304-307 on Page 6, Kara directly uses knowledge as encoder input. It first accesses the latest version of KG via the proposed knowledge retriever to retrieve related knowledge. Then it inputs the knowledge into the encoder by constructing them as the contextualized virtual tokens. Therefore, __any updates of the related knowledge of a protein can be perceived by the knowledge retriever during retrieving, and then integrated during encoding via the contextualized virtual tokens, ensuring that Kara can always use the most current knowledge for inference.__ \\n\\n&nbsp;\\n\\n```W2. While the model shows improvements in task-specific contexts, its ability to generalize across broader protein types or different biological conditions remains uncertain.```\\n\\nThanks for your kind comment. First, we would like to clarify that __all of our downstream task evaluations are conducted in an inductive setting.__ Specifically, proteins that appear in downstream tasks are removed from the pre-training knowledge graph (see lines 710-712, Page 14). __As a result, the proteins used in testing are entirely unseen during pre-training, ensuring that our downstream task evaluations can faithfully reflect the model\\u2019s generalization ability to new proteins.__\\n\\nSecond, as noted in Appendix B on Page 14, our experiments involve testing proteins collected from a variety of public datasets, such as ProteinNet, SKEMPI, and TAPE. 
__These datasets encompass a wide range of protein types__, including antibodies, enzymes, antiporters, truncated hemoglobins, chaperone proteins, G protein-coupled receptors, activin receptors, and glycopeptides, among others. __Furthermore, these proteins are not restricted to a single biological condition__; they come from diverse organisms, including Escherichia coli, Sus scrofa (pig), Homo sapiens (human), Mycobacterium bovis, etc. __As shown in Figure 1, Kara consistently outperforms SOTA models (i.e., KeAP and ESM-2) across all tasks, underscoring its superior generalization capability across diverse protein types and unseen proteins.__\"}", "{\"title\": \"Response to the weaknesses, question (1), and question (2).\", \"comment\": \"```W1. The Introduction needs to provide more background information, such as the specific role of Knowledge Graphs (KGs) in this context, the benefits they offer, and the rationale behind exploring KG-based methods.```\\n\\nThank you for your kind comment. In ProteinKG25, __each piece of knowledge is represented as a triple (protein, relation, gene ontology annotation) that describes the role of a protein in biological activities.__ Integrating this KG offers two benefits for enhancing protein embeddings:\\n\\n- __Multi-modal information integration:__ Traditional protein language models rely solely on amino acid sequences, providing only the information of \\u201cwhat is this protein composed of\\u201d. 
__In contrast, the textual knowledge in the KG provides high-level insights into a protein's role in biological activities, which amino acid sequences cannot reveal.__ Incorporating this KG therefore enables representing proteins from different perspectives, resulting in more comprehensive and higher-quality protein embeddings.\\n\\n- __Protein relevance indication:__ The KG structure highlights functional similarities among proteins, (i.e., if two proteins connect to the same gene ontology annotation through identical relations, this means they have the same biological roles, and thus share similar functions). __An ideal protein language model would embed functionally similar proteins closer together. However, traditional protein language models only use amino acid sequences, and miss these functional similarities, leading to less precise embeddings.__\\n\\nWe will add these points as a new paragraph in the introduction of the revised version of our paper.\\n\\n&nbsp;\\n\\n```Q1. How is the ProteinKG25 knowledge graph selected? There are many other well-known protein-related multi-omics knowledge graphs, such as PharmKG (Briefings in bioinformatics, 2021), CKG (Nature biotechnology, 2022). Do the types and numbers of entities and relationships affect model performance?```\\n\\nThank you for your kind comment. __First, we chose ProteinKG25 to ensure a fair comparison with baselines, as it is widely used by existing knowledge-enhanced protein language models (e.g., KeAP and OntoProtein).__\\n\\n__Second, most multi-omics knowledge graphs lack the essential information required for pre-training protein language models.__ For instance, while PharmKG contains entities like diseases, genes, and medicines, it does not include protein entities. 
Although CKG includes protein entities, it does not annotate them with their amino acid sequences.\\n\\nAdditionally, the table below presents Kara's performance using subsets of ProteinKG25, simulating different numbers of entities and relationships (by randomly selecting 70% and 50% of entities and relationships). __Kara consistently outperforms KeAP and OntoProtein (which use the full ProteinKG25) across these scenarios, demonstrating its robustness.__\\n\\n| Models | Concate (6 \\u2264 seq \\u2264 12) | Homology | Stability | Affinity (lower is better) |\\n| :-----:| :-----:| :----: | :----: | :----: |\\n| OntoProtein (full KG)| 0.460 | 0.240 | 0.750 | 0.590 |\\n| KeAP (full KG)| 0.510 | 0.290 | 0.800 | 0.520 |\\n| Kara (50% entities and relations)| 0.535| 0.312 | 0.819 | 0.513 |\\n| Kara (70% entities and relations)| 0.542 | 0.317 | 0.824 | 0.505 |\\n\\n&nbsp;\\n\\n```Q2. The knowledge graph contains only positive samples for interaction-based tasks. Did the authors incorporate negative sampling during training? If so, please provide additional details on how this was implemented.```\\n\\nThank you for your feedback. __First, to clarify, the knowledge graph (KG) in our work does not contain any task-specific samples.__ As explained in Section 2, each knowledge in ProteinKG25 is represented as a triple\\u2014(protein, relation, gene ontology annotation)\\u2014which describes a protein's role in biological activities or gene functions. __The purpose of incorporating the KG is not to increase training samples for downstream tasks, but to infuse general biological knowledge into language models.__\\n\\nSecond. 
We incorporate negative sampling during pretraining for structure-based regularization, and __we have described it and detailed its implementation in lines 228-234 on Page 5 and lines 786-787 on Page 15.__ To clarify the negative sampling method used in Kara, we provide further details below:\\n\\n- In ProteinKG25, __two proteins connecting with a gene ontology annotation through the same relation will have the same role in biological activities, and thus have similar functions. We propose structure-based regularization to force proteins with similar functions to be closer together in embedding space__ - an aspect overlooked by previous works.\\n\\n- Specifically, we define positive samples as protein pairs (p_i, p_j), where p_i and p_j share at least one similar function, (i.e., both (p_i, r_k, go_k) and (p_j, r_k, go_k) exist in KG). Conversely, __if protein p_i and p_m have no shared (r, go) combinations in KG, (p_i, p_m) will be regarded as a negative sample.__\"}", "{\"title\": \"Response to question (5).\", \"comment\": [\"```5. From many experimental results (e.g., Table 3 and Table 6), we can see that KeAP has achieved comparable performance to Kara. Please describe the difference between the two methods in detail, and be curious about the complexity and number of parameters of the two methods.```\", \"Thanks for your kind comment. We provide a detailed comparison between our Kara and KeAP models in terms of architecture, complexity, and parameter numbers.\", \"## __Model Architecture__\", \"From how to integrate knowledge into language models.\", \"__KeAP implicitly embeds knowledge within the parameters of the language model.__ Specifically, during pre-training, it uses the protein language model to encode a protein's amino acid sequence into an embedding. A transformer-based decoder then takes this embedding along with related knowledge to predict masked amino acid tokens. 
KeAP proposes that this knowledge-guided pre-training approach helps retain knowledge within the model parameters. However, as we have discussed in lines 50\\u201367 on Page 1, __language models often struggle to retain knowledge precisely. Additionally, KeAP processes each piece of knowledge independently, failing to integrate the complete knowledge context of proteins.__\", \"__Kara directly uses knowledge of each protein as a part of the language model's input.__ As described in Section 3.1.1, Kara summarizes 1-hop neighbors of a protein (gene descriptions) as \\\"knowledge virtual tokens\\\" and 2-hop neighbors (functionally similar proteins) as \\\"structure virtual tokens.\\\" These virtual tokens are then concatenated with the amino acid sequence to form the model input. __This approach not only can input precise knowledge information to the language model, but also provides a broader knowledge context by leveraging neighboring information.__\", \"From how to pre-train the language model.\", \"__KeAP employs a decoder to predict masked amino acid tokens using knowledge input and embeddings encoded by protein language model.__ However, the transformer-based decoder introduces __significant training complexity and a large number of parameters.__ Additionally, KeAP's pre-training __overlooks the protein relevance provided by the KG structure__, leading to insufficient knowledge exploitation.\", \"__Kara predicts masked amino acid tokens directly using the protein language model with the prompt of virtual tokens.__ This eliminates the need for a decoder, reducing both training complexity and parameter size. 
Furthermore, as discussed in Section 3.1.2, __Kara is also trained to embed the functionally similar proteins closer together in embedding space__, integrating high-order graph structural relevance (i.e., functional similarity) into protein representations.\", \"From how to encode new proteins.\", \"Since KeAP assumes that knowledge has been embedded within parameters of the language model, __they directly input the amino acid sequence of new protein into the pre-trained language model to get its embedding, which suffers from imprecise knowledge information, and fails to adapt to knowledge updates.__\", \"Kara proposes a novel knowledge retriever, __retrieving related knowledge for each new protein and summarizing the retrieved knowledge as virtual tokens to input into the language model__, which can integrate precise knowledge into the protein language model. Moreover, any updates of the related knowledge of a protein can be perceived by the knowledge retriever during retrieving, and then integrated during encoding via the virtual tokens, __ensuring that Kara can always use the most current knowledge for encoding.__\", \"## __Complexity__\", \"Due to the incorporation of a transformer-based decoder, the additional time complexity of KeAP compared to vanilla protein language models is O(|S|^2 * d), where |S| is the length of protein amino acid sequence (typically > 500), and d is the embedding hidden size (usually 768 or 1024).\", \"As we have mentioned in lines 317-232 on Page 6, Kara's additional time complexity, compared to vanilla protein language models, arises only from the virtual tokens (increasing from O(|S|^2 * d) to O((|S|+2)^2 * d)) and the retrieval process ( O(|R * k|) ), where |R * k| is much smaller than |S|^2 * d. 
__Therefore, the time complexity of Kara is much smaller than that of KeAP.__\", \"## __Parameter number__\", \"For KeAP, the incorporation of a transformer-based decoder brings a large number of parameters, including Q,K,V, and O weight matrices for n heads, the MLP for the multi-head mechanism, layer normalization, etc.\", \"The additional parameters of Kara only come from four projection matrices: MLP_knowledge, MLP_structure, MLP_G, and MLP_P, __which is much smaller than that of KeAP.__\"]}", "{\"title\": \"Response to question (6).\", \"comment\": \"``` Q6. The experimental design is good, however there are one limitations that preclude the reader to understand how generalizable the method is. Only one protein embedding method (ProtBert) is tested for the pre-trained embeddings.```\\n\\nThanks for your thoughtful comment. The following table shows the performance of Kara using different protein embedding methods (ProtBert [1], ProteinBert [2], and ESM-1b [3]). __We can see that Kara with different pre-trained embeddings can consistently outperform SOTA models, showcasing its generalization ability.__\\n\\n| Models | Concate (6 \\u2264 seq \\u2264 12) | Homology | Stability | Affinity (lower is better) |\\n| :-----:| :-----:| :----: | :----: | :----: |\\n| OntoProtein| 0.460 | 0.240 | 0.750 | 0.590 |\\n| KeAP | 0.510 | 0.290 | 0.800 | 0.520 |\\n| Kara (ProtBert)| 0.553 | 0.323 | 0.830 | 0.501 |\\n| Kara (ProteinBert)| 0.556 | 0.318 | 0.824 | 0.506 |\\n| Kara (ESM-1b)| 0.563 | 0.327 | 0.833 | 0.510 |\\n\\n[1] ProtTrans: towards cracking the language of life\\u2019s code through self-supervised deep learning and high performance computing\\n\\n[2] ProteinBERT: A universal deep-learning model of protein sequence and function\\n\\n[3] Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences.\"}" ] }
0Lpz2o6NDE
Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models
[ "Jingzhi Bao", "Xueting Li", "Ming-Hsuan Yang" ]
3D meshes are widely used in computer vision and graphics because of their efficiency in animation and minimal memory footprint. They are extensively employed in movies, games, AR, and VR, leading to the creation of a vast number of mesh sequences. However, creating temporally consistent and realistic textures for these mesh sequences remains labor-intensive for professional artists. On the other hand, video diffusion models have demonstrated remarkable capabilities in text-driven video generation, enabling users to create countless video clips based solely on their imagination. Despite their strengths, these models often lack 3D geometry awareness and struggle with achieving multi-view consistent texturing for 3D mesh sequences. In this work, we present Tex4D, a zero-shot approach that integrates inherent 3D geometry knowledge from mesh sequences with the expressiveness of video diffusion models to produce multi-view and temporally consistent 4D textures. Given an untextured mesh sequence and a text prompt as inputs, our method enhances multi-view consistency by synchronizing the diffusion process across different views through latent aggregation in the UV space. To ensure temporal consistency, we leverage prior knowledge from a conditional video generation model for texture synthesis. However, straightforwardly combining the video diffusion model and the UV texture aggregation leads to blurry results. We analyze the underlying causes and propose a simple yet effective modification to the DDIM sampling process to address this issue. Additionally, we introduce a reference latent texture to strengthen the correlation between frames during the denoising process. To the best of our knowledge, Tex4D is the first method specifically designed for 4D scene texturing. Extensive experiments demonstrate its superiority in producing multi-view and multi-frame consistent videos based on untextured mesh sequences.
[ "4D texture synthesis", "consistent video generation", "zero-shot" ]
Reject
https://openreview.net/pdf?id=0Lpz2o6NDE
https://openreview.net/forum?id=0Lpz2o6NDE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z28BAI2NWE", "wavOGNiB1b", "uecejpw6N5", "uLy8pSbxVO", "hKPrsUKcJZ", "gbxBzC2P8g", "Yhr8RwLm4c", "YTl6kzdKZA", "Y0M3KqpXRR", "QqBWeoeeTM", "Qdp1qNaA29", "PMWCYCkjXM", "O8GueGcyZS", "NbgK6xMkkg", "LHduqAF0kv", "Kq9MwdXF8A", "GtsgDNhYsr", "EaR4KbaSh6", "D7V4iYRcVQ", "CNBdpGiMQs", "BXKvo3quiB", "9sBgJ0YmBC", "9SYCp7evfi", "7DpvbHV6Bh" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732218906424, 1732215895383, 1729758967410, 1730653116370, 1733030150754, 1732475510966, 1734482029740, 1732714546651, 1732674084909, 1732783074571, 1732475653074, 1732570046500, 1733004665811, 1732217113253, 1730625101870, 1732218061528, 1737523546376, 1732219092916, 1730522311814, 1732216781978, 1732217686949, 1733025614887, 1732475572184, 1732475731869 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_CcR5" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_coJ5" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Area_Chair_indD" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_CcR5" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_MURz" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_MURz" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_jsAN" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Reviewer_coJ5" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ], [ "ICLR.cc/2025/Conference/Submission2974/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author Response to Reviewer CcR5 (1/2)\", \"comment\": \"**Q1:** I still wonder about the specific meaning of \\\"4D texturing\\\" for the object and how this differs from a 3D model that is first textured and then animated using skinning techniques. Even for dynamic textures, one could also generate the dynamic texture for a 3D model and then animate the character through skinning. This approach seems useful if the mesh is also significantly dynamic, such as with topology changes.\\n\\n**A1:** We thank the reviewer for these insightful comments. \\\"4D texturing\\\" in our work refers to the process of generating temporally and spatially consistent textures for dynamic mesh sequences over time. This differs from traditional pipelines that generate static 3D textures first and subsequently animate the mesh through skinning techniques. While traditional methods can handle static textures effectively, they fall short in scenarios requiring dynamic, time-varying textures that reflect changes such as lighting variations, motion-specific deformations (e.g., wrinkles), or stylized animations. In our additional results, the `snowman` case exhibits significant dynamic changes. We suggest the reviewer kindly refer to our updated paper (**Fig. 
13**) and supplementary video.\\n\\n---\\n\\n**Q2:** Could one first perform 3D texturing and then render it with a generative background video? Clarify how the proposed method handles dynamic meshes with topology changes, if applicable. Compare the method with a pipeline of 3D texturing followed by rendering with a generative background video, highlighting any benefits of the presented \\\"integrated\\\" approach. Could you provide examples where dynamic 4D texturing is essential and cannot be achieved through traditional methods?\\n\\n**A2:** We thank the reviewer for providing this insightful suggestion. Please kindly refer to [General Response (A)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b) for the discussion with traditional textured mesh animations. We suggest the reviewer go through our updated supplementary video for the dynamic results and comparison with textured mesh animations.\\n\\n---\\n\\n**Q3:** I also have concerns about the novelty of the paper. The entire pipeline can be seen as a depth-conditioned CTRL-Adapter for mesh sequence texturing with UV space aggregation, which feels like a straightforward composition of existing models. Provide a clearer definition and motivation for \\\"4D texturing\\\" to help readers understand its significance in the context of their work.\\n\\n**A3:** We thank the reviewer for the suggestion to provide a clearer definition and motivation for \\\"4D texturing\\\" and we have added these in our paper (updated in Abstract (**L034-045**) and Introduction (**L057-060, 085-087**)). For the consideration of novelty, our method is the first to handle the task of 4D scene texturing and demonstrate the capability of video diffusion model in offering temporal variations with mesh guidance. 
In addition, we discuss our motivation in [General Response (B)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b), and you can kindly refer to it.\\n\\n---\\n\\n**Q4:** I understand the difficulty in evaluating the results, but it would be helpful and necessary to conduct an evaluation of Appearance Quality, Spatio-temporal Consistency, and Consistency with Prompt via a user study - The quantitative evaluation is insufficient.\\n> The following action can be taken: Conduct a user study evaluating the specific aspects, e.g., Appearance Quality, Spatio-temporal Consistency, and Consistency with Prompt, and compare the proposed method with previous models.\\n\\n**A4:** We have already conducted the user study with these metrics in the initial edition (Appearance Quality, Spatio-temporal Consistency, and Consistency with Prompt). Please kindly refer to **Table 1** and **5.2 Quantitative Evaluation (L503-509)** for details.\\n\\n---\\n\\n**Q5:** Limitations and Future Works should be included.\\n\\n**A5:** Thanks for the suggestion. We provide a \\\"Limitation and Discussion\\\" section in the appendix. We outline the potential limitation in scene-level texturing due to the limited dataset, and leave this for our future work. In addition, we discuss the computation time compared with the 3D texturing method and we believe the computation time may be shortened with the advancement of video diffusion models.\"}", "{\"title\": \"General Response\", \"comment\": \"### (A) Comparison with Textured Mesh Animations (Reviewer `coJ5`, `jsAN`, `CcR5`)\\n\\nWe follow the insightful suggestions proposed by reviewers and conduct additional experiments to compare our method with Text2Tex on animated meshes. Text2Tex is a static texture synthesis method. We first use Text2Tex to generate the textures in T-pose and then animate the mesh. As shown in **Fig. 10** and our supplementary video, Text2Tex struggles to produce plausible temporal variations. 
Furthermore, the `ghost` and `snowman` examples generated by Text2Tex exhibit visible seams. This is because the texture synthesized in T-pose may not cover the entire object due to self-occlusions. As a result, the object will present seam artifacts when animated. Instead, our method can generate vivid consistent characters by textual prompts as shown in **Fig. 13**.\\n\\n---\\n\\n### (B) Motivation for 4D Scene Texturing (Reviewer `coJ5`, `jsAN`, `CcR5`)\\n\\nOur objective is to texture 4D scenes while capturing temporal variations, such as lighting changes, wrinkles, dynamic effects to produce vivid visual results\\u2014a key requirement in downstream tasks like character generation.\\nWe agree that texturing a mesh and subsequently animating it is a straightforward approach that aligns with traditional graphics pipelines. However, this approach involves significant post-processing steps, such as lighting adjustments and appearance transformations, to achieve the final visual quality. These steps are labor-intensive and require specialized expertise by artists. Our goal is to alleviate these challenges using video diffusion models.\\nTo emphasize the temporal changes in the generated textures, we also have designed some prompts, for example, `flashed a magical light`, `dramatic shifts in lighting`, `cyberpunk style` in our experiments (**Fig. 13** and **updated video**). Manually creating these sequences would require per-frame texture design, which is much less efficient compared to our method.\\n\\n---\\n\\n**References**\\n\\n\\n*[1] Chen, Dave Zhenyu and Siddiqui, Yawar and Lee, Hsin-Ying and Tulyakov, Sergey and Nie\\u00dfner, Matthias. Text2tex: Text-driven texture synthesis via diffusion models. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023.*\"}", "{\"summary\": \"# Summary\\n\\nThis paper focuses on creating temporally consistent and realistic textures for mesh sequences. 
The input is an untextured mesh sequence and a text prompt. To achieve this, a method named Tex4D is proposed, which is a zero-shot approach that integrates geometry information from mesh sequences with video diffusion models, specifically the depth-conditioned CTRL-Adapter (Lin et al., 2024), to produce multi-view and temporally consistent 4D textures. The model synchronizes the diffusion process across different views through latent aggregation in the UV space. Additionally, a reference latent texture is introduced to strengthen the correlation between frames during the denoising process.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"# Strengths\", \"The paper is well-written, making it easy to understand and follow.\", \"The related works are sufficiently covered.\", \"Several experiments are conducted for demonstrating its effectiveness.\"], \"weaknesses\": \"Regarding the contribution and motivation, I still wonder about the specific meaning of \\\"4D texturing\\\" for the object and how this differs from a 3D model that is first textured and then animated using skinning techniques. Even for dynamic textures, one could also generate the dynamic texture for a 3D model and then animate the character through skinning. This approach seems useful **if** the mesh is also significantly dynamic, such as with topology changes. Further, when it comes to the video background, I noticed in the supplementary video that there are some dynamic effects from the proposed method, but they are not that significant. 
Could one first perform 3D texturing and then render it with a generative background video?\\n\\nI think the following suggestions could be helpful in justifying the significance of the setting in this paper.\\n- Provide specific examples or use cases where 4D texturing offers advantages over traditional 3D texturing and animation.\\n- Clarify how the proposed method handles dynamic meshes with topology changes, if applicable.\\n- Compare the method with a pipeline of 3D texturing followed by rendering with a generative background video, highlighting any benefits of the presented \\\"integrated\\\" approach.\\n\\nI also have concerns about the novelty of the paper. The entire pipeline can be seen as a depth-conditioned CTRL-Adapter for mesh sequence texturing with UV space aggregation, which feels like a straightforward composition of existing models. I would prefer to see a simple yet effective method of tackling a critical problem. However, as I am still uncertain about the meaning/significance of \\\"4D texture,\\\" this makes me somewhat skeptical about the proposed pipeline.\", \"i_think_the_authors_could_provide_some_arguments_for_the_novelty_of_the_proposed_method\": [\"Highlight the key technical innovations in the pipeline beyond the composition of existing models.\", \"Explain how the introduced method addresses specific challenges in 4D texturing that are not easily solved by existing methods.\", \"Provide a clearer definition and motivation for \\\"4D texturing\\\" to help readers understand its significance in the context of their work (similar to the previous questions).\", \"I understand the difficulty in evaluating the results, but it would be helpful and necessary to conduct an evaluation of Appearance Quality, Spatio-temporal Consistency, and Consistency with Prompt via a user study - The quantitative evaluation is insufficient.\"], \"the_following_action_can_be_taken\": [\"Conduct a user study evaluating the specific aspects, e.g., Appearance 
Quality, Spatio-temporal Consistency, and Consistency with Prompt, and compare the proposed method with previous models.\", \"Limitations and Future Works should be included.\", \"For instance, the authors may discuss the current limitations of their approach, such as some failure cases or more specifically, the types of meshes or textures it struggles with, etc.\", \"Potential future improvements or extensions to the method.\", \"Broader implications or applications of this work in related fields.\", \"# Minor Comments\", \"\\\"To resolve this issue, we analyze the underlying causes and propose a simple yet effective modification to the DDIM (Song et al., 2020) sampling process.\\\" In the introduction section, it would be beneficial to briefly explain how you achieved this.\"], \"questions\": \"Q: \\\"While these methods produce multi-view consistent textures for static 3D objects, they do not address the challenge of generating temporally consistent textures for mesh sequences.\\\" I would appreciate further clarification on the motivation behind \\\"generating temporally consistent textures\\\" for mesh sequences. Could you provide examples where dynamic 4D texturing is essential and cannot be achieved through traditional methods?\", \"q\": [\"How does the model ensure robustness when dealing with varying UV mapping results?\", \"How sensitive is the method to different UV unwrapping techniques?\", \"Did the authors experiment with different UV mapping strategies, and if so, what were the results?\", \"Are there any limitations or best practices for UV mapping when using this method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel framework for generating textures for mesh sequences. 
The authors utilize a depth-conditioned video diffusion model to ensure temporal consistency in videos generated from rendered mesh sequences for each predefined viewpoint. To achieve multi-view consistency, they adopt a UV space texture aggregation strategy. Additionally, they propose a modified sampling approach to address the issue of blurriness in the generated textures.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"See below.\", \"weaknesses\": \"See below.\", \"questions\": \"Although the experiments provide some evidence of the proposed method\\u2019s effectiveness, several concerns remain:\\n\\n1. Could the authors provide additional qualitative and quantitative comparisons in the ablation study? With only one demonstration, it is difficult to convincingly assess the effectiveness of the proposed method.\\n\\n2. The authors suggest that video diffusion models struggle with multi-view consistent texturing for 3D mesh sequences due to a lack of 3D geometry awareness. However, the approach already uses a depth-aware video diffusion model, which inherently includes some geometric awareness. Why does this straightforward combination not achieve the desired consistency? Does this imply that depth-aware video diffusion models alone cannot guarantee multi-view consistency even with depth information? If so, could the authors provide performance metrics or visual comparisons showing results when using only the depth-conditioned video diffusion model as a prior? Additionally, for a single viewpoint, does the video diffusion model produce temporally consistent results? If not, visual examples would help clarify.\\n\\n3. Since a mesh input is available, a straightforward approach could be to texture the mesh on the first frame using methods like Text2Tex or SceneTex, then animate the textured mesh. This method might improve efficiency and naturally maintain multi-view consistency across frames. 
How does this alternative approach compare in terms of both methodology and performance? An in-depth discussion of these differences would be beneficial.\\n\\n4. The authors mention that for each predefined viewpoint, a sequence of K rendered meshes is used as input and individually textured by the depth-guided diffusion model. Could the authors clarify the motivation behind this setup? Since the videos are generated separately for each view, multi-view inconsistencies are expected. Why introduce this setup if it inherently leads to consistency issues at the start?\\n\\n5. While using UV textures for each mesh can enhance multi-view consistency, this approach seems more like an averaging of multiple viewpoints to produce a smoother result. Can the authors elaborate on how this averaging mechanism ensures true multi-view consistency?\\n\\n6. Given that the current method requires rendering V views for each mesh in the sequence, which may be computationally intensive, could the authors discuss the efficiency of the method? Details on the time required to process a sample would help assess the method's practicality.\\n\\n7. It would be beneficial to include video visualizations or comparative examples to further illustrate the method's performance and effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your suggestions and detailed feedback. We would like to further clarify the points you raised.\\n\\n1. We appreciate your suggestion regarding additional examples. We will provide more visual examples and include quantitative results from our user study in the appendix in our revision.\\n\\n2. (and 4) We think the main concern from the reviewer's side is about the plausibility of the latent aggregation strategy. 
Some prior works have demonstrated the effectiveness of generating high-quality textures for static 3D meshes in the T-pose using texture aggregation in the latent space, such as SyncMVD [1] and Meta 3D TextureGen [2]. Our method denoises the latents in temporal batches and integrates with the multi-view aggregation strategy to maintain the consistency. In addition, we experimented with the direct latent blending strategy in Fig. 3, which causes the over-smoothing problem because of the \\\"variation shifts problem\\\" discussed in L305-309. We overcome this issue by rewriting the denoising formula. As shown in the qualitative results, our method produces detailed texture sequences (Fig. 7) and consistent visual appearances (Fig. 12 and Fig. 13).\\n\\nHope these clarifications address your concerns. Thank you again for your detailed review and for engaging with our work.\\n\\n---\\n\\n**References**\\n\\n*[1] Yuxin Liu, Minshan Xie, Hanyuan Liu, Tien-Tsin Wong. Text-Guided Texturing by Synchronized Multi-View Diffusion. SIGGRAPH Asia. 2024.*\\n\\n*[2] Raphael Bensadoun and Yanir Kleiman and Idan Azuri and Omri Harosh and Andrea Vedaldi and Natalia Neverova and Oran Gafni. Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects. arXiv preprint arXiv:2407.02430*\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer coJ5\\n\\nWe sincerely appreciate your reviews and comments on our paper. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all your concerns. 
Please let us know if you have further questions after reading our rebuttal.\\n\\nWe hope to address all the potential issues during the discussion period. Thank you once again for your time and effort in reviewing our submission.\\n\\nSubmission#2974 Authors\"}", "{\"metareview\": \"This paper introduces a method for generating temporal and multi-view consistent textures for a mesh sequence in a training-free manner using a pretrained depth-conditioned video diffusion model. The proposed method builds upon previous 3D texture generation methods such as SyncMVD and TexGen, but adapts these ideas for 4D texture generation. The authors propose generating the foreground and background separately and then aggregating them. They also suggest a method to improve the denoising process and reduce blurry results. The experimental results include comparisons with previous methods, demonstrating a lower FVD score of the proposed method.\\n\\nAll reviewers gave negative feedback and were not convinced about the need for a 4D generation method, as it is also possible to generate 3D textures and animate the object. While the authors emphasized in the rebuttal that 4D generation can incorporate lighting changes, wrinkles, and dynamic effects, the additional results were not convincing to all the reviewers and the AC. Furthermore, although the authors claim that applying these changes in the graphics pipeline requires significant labor, if users can animate a mesh, changing lighting and adding dynamic effects would not be much more burdensome.\\n\\nRegarding the technical contributions, from the AC's perspective, the proposed method appears to be a straightforward extension of the 3D texture generation approach to 4D using a video generative model, with some engineering techniques such as foreground/background separation. 
However, it does not seem to provide a sufficient technical contribution to meet the ICLR standards.\", \"additional_comments_on_reviewer_discussion\": \"Please see the Metareview.\"}", "{\"comment\": \"Thank you for your further clarification and thoughtful suggestions. One of the primary goals of our work is to combine the expressive capabilities of video diffusion models with the consistency inherent in meshes and textures for the creation of consistent and controllable character animations, as the current video diffusion models lack the ability to ensure the multi-view consistency for characters, and the dynamic texture creation is labor-intensive. We also demonstrate that video diffusion models can offer temporal variations for dynamic textures. To the best of our knowledge, no prior work has explored dynamic texture creation using diffusion models in a zero-shot manner.\\n\\nWe appreciate your feedback and will consider explicitly emphasizing dynamic texture creation (with illustrative examples) as part of our contributions in the revision.\"}", "{\"title\": \"reply to authors\", \"comment\": \"Thank you for your detailed response. Most of my concerns have been addressed, but I still can not agree with the explanation in A1: \\\"...This differs from traditional pipelines that generate static 3D textures first and subsequently animate the mesh through skinning techniques...\\\" When I referred to \\\"dynamic texture,\\\" I meant animatable textures, which could represent true dynamic textures. For clarification, you might refer to some examples available in asset stores. From the video, I feel the texture dynamic patterns are limited (not that dynamic and most results are just with some simple patterns).\\nWhile I now see some merit in this work as a *\\\"first attempt\\\"* to tackle this problem, I remain unconvinced by the arguments presented in A1. I have adjusted my score from 3 to 5, as some of the answers effectively addressed my concerns. 
However, overall, I remain critical of the underlying assumptions and approach of this work.\"}", "{\"comment\": \"Dear Reviewer MURz,\\n\\nThanks for your suggestions. We would appreciate it if you could provide more details on why and how the soft mask could be incorporated into our approach. Additionally, we would like to point out that video inpainting methods are typically view-dependent, which limits their ability to provide consistent backgrounds (similar as we discussed in **D.1**).\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer jsAN\\n\\nWe sincerely appreciate your reviews and comments on our paper. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all your concerns. Please let us know if you have further questions after reading our rebuttal.\\n\\nWe hope to address all the potential issues during the discussion period. Thank you once again for your time and effort in reviewing our submission.\\n\\nSubmission#2974 Authors\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for the rebuttals. \\n* For Q2, I would actually doubt the non-seamless blending origins from the direct mask applied in the equation (7). There should be a soft mask applied to foreground-background, rather than just a simple 1s 0s mask. \\n* For Q3, I understand that this is the first work to address 4D scene texturing, however, a straightforward baseline approach could involve using a 4D mesh texturing method to generate the dynamic foreground object, combined with video inpainting to create a consistent background. \\n\\nGiven the thoughts above, I would remain my current rating unchanged.\"}", "{\"title\": \"Please urge reviewers to participate in discussion\", \"comment\": \"Dear AC and SAC,\\n\\nWe have provided answers and explanations to reviewers' questions. 
As some reviewers have not engaged in discussions despite several follow-up emails, can you send an email to urge them to participate before the deadline on Dec 2?\\n\\nThank you,\"}", "{\"title\": \"Author Response to Reviewer coJ5 (2/2)\", \"comment\": \"**Q5:** While using UV textures for each mesh can enhance multi-view consistency, this approach seems more like an averaging of multiple viewpoints to produce a smoother result. Can the authors elaborate on how this averaging mechanism ensures true multi-view consistency?\\n\\n**A5:** The UV space serves as a global reference and we merge the information from different views into the UV space, which ensures global consistency. Inspired by some mesh texturing methods (e.g., SyncMVD, Meta TextureGen), we simply use weighted aggregation to merge latents observed from different views.\\n\\n---\\n\\n**Q6:** Given that the current method requires rendering V views for each mesh in the sequence, which may be computationally intensive, could the authors discuss the efficiency of the method? Details on the time required to process a sample would help assess the method's practicality.\\n\\n**A6:** We have included average computation times in the appendix for clarity. Our method requires approximately **30 minutes** per sequence, which is comparable to static texture generation methods like Text2Tex (**22 minutes** for a static texture). The computation time primarily depends on the foundation model (CTRL-Adapter), which takes approximately **5 minutes** to generate a video with 24 frames. 
We anticipate significant efficiency improvements with advancements in conditioned video diffusion models, further enhancing the practicality of our approach.\\n\\n---\\n\\n**Q7:** It would be beneficial to include video visualizations or comparative examples to further illustrate the method's performance and effectiveness.\\n\\n**A7:** We have provided additional experiments highlighting temporal changes using textual prompts such as `flashed a magical light`, `dramatic shifts in lighting`, `cyberpunk style` in **Fig. 13** and included these results along with video visualizations in the supplementary materials. Also, we have updated the main paper with **Fig. 7~Fig. 13** for comprehensive comparisons.\"}", "{\"summary\": \"This paper proposed a 4D scene texturing approach by video diffusion models, a zero-shot pipeline for generating temporally and multi-view consistent textures. In order to aggregate multiview latents in UV space, they discovered the issue of \\\"variance shift\\\" caused by their aggregation, and proposed to modify DDIM sampling process to address the issue. By UV blending during denoising steps, the issue of self-occlusion is addressed and synchronized in invisible regions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors assert that this is the first method developed specifically for 4D scene texturing.\\n2. The authors introduce a multi-frame consistent texture generation technique, demonstrating improved consistency in results compared to baseline methods.\\n3. The paper is fluent and well-written, contributing to its readability and overall clarity.\", \"weaknesses\": \"1. The generated textures do not blend seamlessly with the background, creating a disjointed appearance that resembles separate foreground and background elements stitched together.\\n2. 
Despite claims of multi-view consistency, flickering effects are observed across different views, indicating instability in rendering.\\n3. Some of the compared methods, such as TokenFlow and Text2Video-Zero, do not utilize mesh or depth inputs, making direct comparisons less equitable.\", \"questions\": \"1. In the paper, the authors mentioned that the mesh texture could be significantly influenced by the background, providing an example with a white background. I\\u2019m curious how the generated texture might look if a non-white background was used, especially one that contrasts strongly with the foreground object. How would such a background affect the consistency and quality of the generated texture?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer jsAN\", \"comment\": \"**Q1:** The importance of this pipeline should be further clarified, such as by comparing it with pipelines based on 2D poses or textured meshes. The paper should include more comprehensive comparisons to highlight the contribution of the pipeline. For example, is it a reasonable pipeline to first generate textured meshes and then use animated meshes for video generation? Would using textured meshes yield better outcomes?\\n\\n**A1:** We appreciate the reviewer\\u2019s insightful comment. To address the concern, we have added visual comparisons of traditional textured mesh animations in **Fig. 10**, accompanied by a detailed video comparison in the supplementary materials. Additionally, our pipeline is designed to handle general characters, not limited to human figures, where 2D poses may not always be available. This flexibility broadens the scope of our approach. To further support this, we have included more visual results in **Fig. 
13** and the supplementary videos, showcasing the versatility and contributions of our method in comparison to existing pipelines. We also compared our method with traditional textured mesh animations (based on Text2Tex); please kindly refer to [General Response (A)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b).\\n\\n---\\n\\n**Q2:** Animation can drive the mesh. Are the position and rotation of the mesh manually controlled, such as in the second example on the first page? How is the animated mesh obtained?\\n\\n**A2:** We note that our work assumes animated mesh sequences as inputs and focuses on generating plausible dynamic textures, instead of mesh animation. Specifically, we obtain the animations from the Mixamo and Sketchfab websites. The Mixamo assets include skeleton rotations, rigging, and skinning weights. We animate the mesh by linear blend skinning based on the pre-defined skeleton hierarchy. Sketchfab provides complete animations, and we extract the vertices and faces using Blender software.\\n\\n---\\n\\n**Q3:** We observe some temporal changes in Figure 5. Is this one of the contributions of your paper? What are the advantages of generating videos with untextured meshes compared to textured meshes? \\n\\n**A3:** Yes, capturing temporal changes is one of the contributions of our paper. Our method captures the temporal changes (e.g., lighting controls, appearance transformations) using a video diffusion model, which is hard to obtain with textured mesh animations; please kindly refer to [General Response (B)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b) for our discussion.\\n\\n---\\n\\n**Q4:** How do you distinguish between temporal changes and temporal inconsistencies, as there are some temporal inconsistencies in your results? \\n\\n**A4:** Temporal changes and temporal inconsistencies are not necessarily in conflict but represent different aspects of temporal dynamics. 
Temporal changes refer to deliberate and meaningful variations over time, such as lighting shifts, surface deformations (e.g., wrinkles), or stylistic transformations, which are essential for capturing realism and dynamism in animations. Temporal inconsistency usually refers to sudden unrealistic changes caused by limitations of the texture generation methods. We suggest the reviewer kindly refer to **Fig. 13** and our supplementary video for these results.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author Response to Reviewer CcR5 (2/2)\", \"comment\": \"**Q6:** How does the model ensure robustness when dealing with varying UV mapping results? How sensitive is the method to different UV unwrapping techniques? Did the authors experiment with different UV mapping strategies, and if so, what were the results? Are there any limitations or best practices for UV mapping when using this method?\\n\\n**A6:** Our method does not rely on a dedicated UV initialization, unlike some texture-painting methods (e.g., Paint3D, Meta TextureGen) edited by artists. Our method utilizes XATLAS to unwrap the UV maps from meshes without human labor. XATLAS is a widely used library for mesh parameterization commonly integrated into popular tools and engines, facilitating efficient texture mapping in 3D graphics applications. We visualize the texture sequences generated by our method in **Fig. 7**.\\n\\n---\\n\\n**Q7:** Minor Comments: \\\"To resolve this issue, we analyze the underlying causes and propose a simple yet effective modification to the DDIM (Song et al., 2020) sampling process.\\\" In the introduction section, it would be beneficial to briefly explain how you achieved this.\\n\\n**A7:** Thanks for the suggestion. We have updated this sentence in our paper (**L092-095**) for a more detailed explanation.\\n\\n---\\n\\nWe sincerely thank the reviewer for providing perceptive comments and suggestions on our work. 
We have significantly revised our paper based on your valuable suggestions. Specifically, we have added clearer definition and motivation for 4D texturing, and discussed the limitation of traditional textured mesh animations before introducing our method, together with visual comparison with textured mesh animations, texture visualization, and additional discussions. We believe these points will strengthen our work.\"}", "{\"summary\": \"This paper introduces 4D scene texturing to generate textures that are consistent both temporally and across multiple views for animated mesh sequences. Tex4D, uses 3D geometry to synchronize diffusion processes, incorporates video generation model insights for temporal consistency, and modifies the DDIM sampling process to improve texture clarity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is the first work to perform video generation based on animated mesh sequences, while its UV mapping strategy ensures multi-view consistency. The experimental results show significant advantages compared to some existing works.\", \"weaknesses\": \"Under the current pipeline, this work has yielded highly effective results. However, the importance of this pipeline should be further clarified, such as by comparing it with pipelines based on 2D poses or textured meshes. The paper should include more comprehensive comparisons to highlight the contribution of the pipeline. For example, is it a reasonable pipeline to first generate textured meshes and then use animated meshes for video generation?\", \"questions\": \"Overall, the experimental results are quite satisfactory; however, there is a lack of explanation regarding the advantages of the pipeline compared to other pipelines using 2D poses and textured meshes.\\n\\nAnimation can drive the mesh. 
Are the position and rotation of the mesh manually controlled, such as in the second example on the first page?\\n\\nHow is the animated mesh obtained? We observe some temporal changes in Figure 5. Is this one of the contributions of your paper? How do you distinguish between temporal changes and temporal inconsistencies, as there are some temporal inconsistencies in your results?\\n\\nWould using textured meshes yield better outcomes? What are the advantages of generating videos with untextured meshes compared to textured meshes?\\n\\nWhat is the difference between using a 2D pose and an animated mesh?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer coJ5 (1/2)\", \"comment\": \"**Q1:** Additional qualitative and quantitative comparisons in the ablation study. With only one demonstration, it is difficult to convincingly assess the effectiveness of the proposed method.\\n\\n**A1:** We follow Reviewer `coJ5` and `MURz`'s suggestion and include one more example for the UV reference module ablation in **Fig. 9** and more ablation experiments on backgrounds in **Fig. 8** (alternative noise shuffle strategy used in SyncMVD (a), high contrast with the foreground (b), another case with `Ironman` \\(c\\)). Our findings suggest that the background initialization indeed affects the appearance of the texture, and that the background shuffling strategy used in static texture generation may not be applicable in our pipeline.\\n\\n---\\n\\n**Q2:** The authors suggest that video diffusion models struggle with multi-view consistent texturing for 3D mesh sequences due to a lack of 3D geometry awareness. However, the approach already uses a depth-aware video diffusion model, which inherently includes some geometric awareness.\\nWhy does this straightforward combination not achieve the desired consistency? 
Does this imply that depth-aware video diffusion models alone cannot guarantee multi-view consistency even with depth information? If so, could the authors provide performance metrics or visual comparisons showing results when using only the depth-conditioned video diffusion model as a prior? Additionally, for a single viewpoint, does the video diffusion model produce temporally consistent results? If not, visual examples would help clarify.\\n\\n**A2:** We appreciate the reviewer\\u2019s observation regarding the role of depth-aware video diffusion models in achieving multi-view consistency. While depth maps indeed provide some geometric information and allow consistent results within a single viewpoint (as demonstrated by the CTRL-Adapter), they are inherently view-dependent and lack global spatial information encoded by the 3D mesh, such as the absolute position of points in a world coordinate system. This limitation prevents consistent character generation and texture generation across different views. In **Fig. 11**, we have added three experiments to demonstrate that depth-conditioned diffusion models alone fail to ensure a globally consistent texture. The results highlight how depth maps, despite their localized geometric guidance, cannot resolve inconsistencies across different viewpoints.\\nIn contrast, UV maps act as a global constraint by mapping 3D spatial positions to consistent 2D UV coordinates $(x, y, z)\\_{xyz} \\rightarrow (x\\u2019, y\\u2019)\\_{\\text{UV}}$, leveraging the point correspondence of the same points on the object in different views, thus encouraging 3D consistency.\\n\\n---\\n\\n**Q3:** Since a mesh input is available, a straightforward approach could be to texture the mesh on the first frame using methods like Text2Tex or SceneTex, then animate the textured mesh. This method might improve efficiency and naturally maintain multi-view consistency across frames. 
How does this alternative approach compare in terms of both methodology and performance? An in-depth discussion of these differences would be beneficial.\\n\\n**A3:** We appreciate the reviewer's suggestion for an in-depth discussion of textured mesh animation, using methods such as Text2Tex for comparison. Please kindly refer to [General Response (A)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b) for the discussion with traditional textured mesh animations (Text2Tex) and [General Response (B)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b) for the motivation of 4D scene texturing. \\n\\n---\\n\\n**Q4:** The authors mention that for each predefined viewpoint, a sequence of K rendered meshes is used as input and individually textured by the depth-guided diffusion model. Could the authors clarify the motivation behind this setup? Since the videos are generated separately for each view, multi-view inconsistencies are expected. Why introduce this setup if it inherently leads to consistency issues at the start?\\n\\n**A4:** We aim to texture 4D scenes and capture temporal variations (e.g., lighting and wrinkles) within mesh sequences to produce vivid visual results, which is widely expected in downstream tasks (e.g., generating a consistent character). Depth maps alone, even when used with video diffusion models, fail to achieve this goal due to their lack of global consistency as shown in **Fig. 13**. Instead, we use UV space as a global reference, which is an off-the-shelf attribute of the mesh and ensures consistent texturing across all views. Our method merges the appearance from different views, aggregates the information in UV space, and renders the latents from the latent UV for each view to achieve consistency. 
This strategy provides a robust solution for multi-view consistency while capturing dynamic temporal details, although the initial views may not be well-aligned across different views.\"}", "{\"title\": \"Author Response to Reviewer MURz\", \"comment\": \"**Q1:** In the paper, the authors mentioned that the mesh texture could be significantly influenced by the background, providing an example with a white background. I\\u2019m curious how the generated texture might look if a non-white background was used, especially one that contrasts strongly with the foreground object. How would such a background affect the consistency and quality of the generated texture?\\n\\n**A1:** We appreciate the insightful question. In **Fig. 8**, we have included additional experiments to demonstrate the effect of varying background conditions. Specifically, we tested an alternative background noise shuffle used in SyncMVD (a) and a highly contrasting background noise initialization (b). The results indicate that the texture tends to be influenced by the high-contrast background latent initialization, leading to deviations from the intended textual prompt. This suggests that while our method can generate textures under various backgrounds, strong contrast can negatively impact both the consistency and alignment of the texture with the desired description.\\n\\n---\\n\\n**Q2:** The generated textures do not blend seamlessly with the background, creating a disjointed appearance that resembles separate foreground and background elements stitched together.\\n\\n**A2:** We agree with the observation regarding the disjointed appearance between the foreground and background elements. This issue primarily stems from the lack of a comprehensive scene-level dataset for 4D texturing, which constrains our approach. Currently, we use a shared background mesh across different views, which can disrupt overall visual consistency. 
Addressing the challenge of seamless integration in scene-level 4D texturing is an open problem, and we intend to explore potential solutions in future work to improve coherence between foreground and background elements. We have updated the Limitation and Discussion section in our appendix.\\n\\n---\\n\\n**Q3:** Some of the compared methods, such as TokenFlow and Text2Video-Zero, do not utilize mesh or depth inputs, making direct comparisons less equitable.\\n\\n**A3:** We acknowledge the concern regarding the fairness of comparisons with methods like TokenFlow and Text2Video-Zero, which do not utilize mesh or depth inputs. Since our work is the first to address 4D scene texturing, there are no existing methods that directly align with our setup. We have endeavored to provide a comprehensive comparison by evaluating current text-to-image (T2I) and text-to-video (T2V) methods under various conditions, **including depth, mesh**, DDIM features, and DensePose features. While depth and mesh conditions are closest to our scenario, other conditions are also relevant and widely discussed in the context of controllable video generation.\\n\\n---\\n\\nIn addition, we kindly encourage the reviewer to view our updated supplementary video, which illustrates the advantages of our method in achieving consistent character generation compared to traditional textured mesh animation methods, as discussed in [General Response (B)](https://openreview.net/forum?id=0Lpz2o6NDE&noteId=wavOGNiB1b).\"}", "{\"comment\": \"I appreciate the authors' efforts, but my concerns remain unresolved:\\n\\n1.\\tThe authors provide a few examples to demonstrate the method's effectiveness, but these are not convincing without quantitative experiments on a larger test set, as done in Table 1.\\n2.\\tIn Fig. 11, the authors present three examples of videos generated by the video diffusion model from different viewpoints. 
While I agree that there is significant inconsistency between viewpoints, I question whether using such highly inconsistent videos as priors is a reasonable approach. This raises further concerns mentioned in Q5: using UV textures for each mesh to enhance multi-view consistency seems to average out the high inconsistency, resulting in smoother and less detailed textures.\\n3.\\tThe authors show two examples generated by Text2Tex in Fig. 10. I acknowledge that these examples contain artifacts due to self-occlusions and other factors. How do the results from the proposed method compare for the same examples? Will it provide more detail and texture variation? A direct comparison would offer more insight.\\n4.\\tAs noted in A2, the authors claim that the current setup is intended to achieve multi-view consistent results. However, given the large variations (as shown in Fig. 11) across different viewpoints by the diffusion model, it\\u2019s hard to believe that the proposed method can produce high-quality results. The produced consistent results are just an average of different viewpoints. Of course, the averaged results will be consistent, but at the cost of losing details from the original individual viewpoints and introducing over-smoothness.\\n5.\\tPlease refer to A2 and A4 for further clarification.\\n\\nThe revisions have not fully addressed my concerns, and the visualizations in Fig. 11 further increased my concerns about the plausibility of the paper\\u2019s setup (See A2, A4). Given these ongoing concerns, I will maintain my current rating.\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer MURz\\n\\nWe sincerely appreciate your reviews and comments on our paper. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all your concerns. Please let us know if you have further questions after reading our rebuttal.\\n\\nWe hope to address all the potential issues during the discussion period. 
Thank you once again for your time and effort in reviewing our submission.\\n\\nSubmission#2974 Authors\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer CcR5\\n\\nWe sincerely appreciate your reviews and comments on our paper. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all your concerns. Please let us know if you have further questions after reading our rebuttal.\\n\\nWe hope to address all the potential issues during the discussion period. Thank you once again for your time and effort in reviewing our submission.\\n\\nSubmission#2974 Authors\"}" ] }
0LSAmFCc4p
Brain-inspired $L_p$-Convolution benefits large kernels and aligns better with visual cortex
[ "Jea Kwon", "Sungjun Lim", "Kyungwoo Song", "C. Justin Lee" ]
Convolutional Neural Networks (CNNs) have profoundly influenced the field of computer vision, drawing significant inspiration from the visual processing mechanisms inherent in the brain. Despite sharing fundamental structural and representational similarities with the biological visual system, differences in local connectivity patterns within CNNs open up an interesting area to explore. In this work, we explore whether integrating biologically observed receptive fields (RFs) can enhance model performance and foster alignment with brain representations. We introduce a novel methodology, termed $L_p$-convolution, which employs the multivariate $L_p$-generalized normal distribution as adaptable $L_p$-masks, to reconcile disparities between artificial and biological RFs. $L_p$-masks find the optimal RFs through task-dependent adaptation of conformation such as distortion, scale, and rotation. This allows $L_p$-convolution to excel in tasks that require flexible RF shapes, including not only square-shaped regular RFs but also horizontal and vertical ones. Furthermore, we demonstrate that $L_p$-convolution with biological RFs significantly enhances the performance of large kernel CNNs, possibly by introducing structured sparsity inspired by the $L_p$-generalized normal distribution in convolution. Lastly, we show that neural representations of CNNs align more closely with the visual cortex when $L_p$-convolution is close to biological RFs.
[ "Lp-Convolution", "Receptive Field", "Multivariate p-generalized normal distribution", "Representation Similarity", "Visual Cortex", "Gaussian Sparsity" ]
Accept (Poster)
https://openreview.net/pdf?id=0LSAmFCc4p
https://openreview.net/forum?id=0LSAmFCc4p
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1ptyVhFVt", "s2Ssl2wswI", "rweINRWqqU", "qHO4xBYAiY", "ikctosVXBx", "iPnldr8rjv", "fqlYldD9s1", "fUAX7zhtT7", "drAcuiVuwh", "boL2xWLbuC", "b0je3dJgae", "ZULtEeuAj8", "YgINAJVEvO", "YbarNHqXU6", "W46MlQXG3b", "TEK5zZd9zH", "Qtrgx9PiR8", "C4RbSOaQsZ", "BqPxZFEknF", "9bU3nE5vTK", "77IpULWzFt", "5KcWsMbWSm", "4qWDGtrzrV", "2urWpzmMSh" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732540308278, 1732537613467, 1737524079424, 1732941173773, 1733148675810, 1732539157201, 1732553441608, 1730715778004, 1732537104812, 1734700448009, 1732538645807, 1733148781221, 1733148730524, 1733148763678, 1730699547460, 1730208627016, 1730212305835, 1732537634295, 1732539637429, 1732942010538, 1732911004718, 1732540399420, 1732537547701, 1732538819946 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_nN4T" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_7Z3f" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Area_Chair_XeSu" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_nN4T" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_ozim" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_ZVMo" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Reviewer_7Z3f" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ], [ "ICLR.cc/2025/Conference/Submission10826/Authors" ] ], "structured_content_str": [ "{\"comment\": \"---\\n>**Weakness 5)** Limited Improvements and Not SoTA\\n\\nWe agree with the reviewer\\u2019s observation that the performance gains seem modest and far from SoTA. However, we **kindly request the reviewer to consider the distinct values of our work beyond achieving SoTA**. Specifically, our contributions lie in:\\n1. exploring the potential of novel biologically inspired inductive biases, and \\n2. developing a new, easily pluggable module for CNNs.\\n\\n### **1. Novel inductive bias** \\nWe acknowledge that the prior works that explore biological ideas focus on providing novel insights and, hence often fail to surpass original ML methods in raw performance [1-3]. Nonetheless, we have put ourselves into situations to provide not only novel insights but also practical improvement to the ML community. We have demonstrated consistent improvements in various architectures and tasks, underscoring the **robustness and versatility** of our approach. While our results are not SoTA, we believe our work has still meaningful contributions to the field by introducing novel inductive bias to the community.\\n\\n### **2. Easily pluggable module** \\nHistorically, CNNs have evolved through the introduction of innovative modules. 
For example, a **depthwise convolution module** was originally proposed in MobileNet [5] to improve computational efficiency, not to achieve SoTA, and now plays a pivotal role in SoTA architectures like ConvNeXt and RepLKNet [6, 7]. Similarly, CoAtNet, a precursor to Astroformer, leveraged a **hybrid module** combining depthwise convolution and self-attention, enabling Astroformer to achieve SoTA solely through architectural refinements [8]. In this context, our proposed **Lp-convolution module** offers practical value as a pluggable component that is easily integrated into existing architectures, facilitating **flexible and efficient deployment** (See Appendix A.21).\\n\\nWe believe that exploring novel bio-inspired algorithms and providing new convolutional modules enrich the ML community by offering both theoretical insights and practical tools. By contributing an additional design choice that enhances robustness and versatility across architectures, our work supports innovation in ML model design.\\n### **References**\\n[1] Pogodin, Roman, et al. \\\"Towards biologically plausible convolutional networks.\\\" NeurIPS (2021) \\n[2] Liu, Yuhan Helena, et al. \\\"Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators.\\\" NeurIPS (2022) \\n[3] Kao, Chia Hsiang, and Bharath Hariharan. \\\"Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning.\\\" NeurIPS (2024) \\n\\n---\\n> **Weakness 6)** Weak Justification for Large, Sparse Kernels and Unclear How RSM Benefits Contemporary Vision Tasks\\n\\n### **1. The justification for using large, sparse kernels** \\nThe use of **large kernels** enables the model to **cover the input space more effectively** with fewer layers compared to smaller kernels [1, 2]. However, simply increasing the kernel size does not guarantee performance improvements, as shown in Table 1 (Base vs. Large). 
This is presumably due to larger kernels inadvertently incorporating irrelevant global information, which can hinder performance compared to smaller kernels that rely on locality inductive biases to extract local features hierarchically.\\nThis is where sparsity plays a key role. We introduce **sparsity** constraints to optimize the usage of large kernels, ensuring they **focus on only relevant global information** while mitigating the disadvantages of na\\u00efvely expanding kernel sizes, supported by the Sudoku experiments. \\n\\n***We will include this discussion*** in the revised manuscript to better justify our approach and clarify its effectiveness.\\n\\n### **2. RSM for Contemporary Vision Tasks** \\nWhile RSM analysis itself is not directly tied to contemporary vision tasks, it serves as a **critical tool to explore biologically inspired design principles that can inform AI models**. In this study, we used RSM analysis to measure the alignment between AI models and the brain with our novel inductive bias idea, rather than relating it to contemporary vision tasks such as classification, detection, or segmentation. The essential question driving this work is whether CNNs behave like the brain and, more importantly, what insights can be gained by applying brain-derived inductive biases to AI models. \\n\\n### **References**\\n[1] Ding, Xiaohan, et al. \\\"Scaling up your kernels to 31x31: Revisiting large kernel design in cnns.\\\" CVPR (2022) \\n[2] Luo et al. \\\"Understanding the effective receptive field in deep convolutional neural networks.\\\" NeurIPS (2016)\"}", "{\"comment\": \"---\\n> **Weakness 2)** Evaluation Limited to \\\"Toy\\\" Datasets\\n\\nGiven the availability of well-established large-scale datasets like ImageNet, smaller datasets such as CIFAR-100 and TinyImageNet are often perceived as 'toy' datasets. 
However, these **smaller datasets are pivotal for examining novel inductive biases**, as highlighted in numerous prior studies [1\\u20135]. This is because, as data size increases, ***the impact of inductive biases tends to diminish due to scaling effects [6, 7]***. For example, CNNs outperform ViTs in data-scarce regimes due to their strong locality inductive bias, while enriched data can obscure the benefits of this bias [1, 7]. Thus, the choice of these smaller datasets was essential for effectively investigating the potential of our proposed inductive biases.\\n\\nAt the same time, we fully recognize the importance of evaluating our method on closer-to-real-world datasets like ImageNet-1k to better understand its broader applicability and potential impact. As this study represents an initial exploration of a novel inductive bias, our primary goal was not to compete for SoTA performance but **to demonstrate the conceptual strength and practical utility** of our approach. Rather than training models from scratch on ImageNet-1k, we deliberately adopted a transfer learning strategy to integrate our method with ImageNet-1k pretrained SoTA models (Figure 3).\\n\\nThe transfer learning results highlight both the conceptual and practical advantages of our method (Table 4). Although the observed effect sizes were modest, **our method seamlessly integrated with pretrained models**, enhancing their adaptability without diminishing their performance. 
This outcome underscores the utility and flexibility of our approach, demonstrating its potential to advance transfer learning applications and contribute to the study and application of novel inductive biases.\\n\\n### **References**\\n[1] Lee et al., \\\"Vision Transformer for Small-Size Datasets.\\\" arXiv preprint arXiv:2112.13492 (2021) \\n[2] Verma et al., \\\"Manifold Mixup: Learning Better Representations by Interpolating Hidden States.\\\" ICML (2019) \\n[3] Zagoruyko et al., \\\"Wide Residual Networks.\\\" BMVC (2016) \\n[4] Feng et al., \\\"Conv2NeXt: Reconsidering ConvNeXt Network Design for Image Recognition.\\\" CAIT (2022) \\n[5] Hu et al., \\\"Unlocking Deterministic Robustness Certification on ImageNet.\\\" NeurIPS (2024) \\n[6] Bachmann et al., \\\"Scaling MLPs: A Tale of Inductive Bias.\\\" NeurIPS (2024) \\n[7] Zhang et al., \\\"ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond.\\\" IJCV (2023)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for taking the time to respond to our writing. It genuinely means a lot to us that you\\u2019ve come back to offer feedback. We believe helping us doesn\\u2019t just mean increasing our scores. We sincerely welcome your constructive criticism at any time\\u2014it has been an invaluable foundation for shaping our research direction. Following, we would like to clarify regarding the reviewer's additional feedback.\\n\\n---\\n> **Re 1)** I do see that it's a novel inductive bias and easily pluggable \\u2013 but to what end? What problem does it solve? 
Being biologically inspired is not an end in itself if it does not address an open problem.\\n\\nYes, we completely agree with your point: \\u201cBeing biologically inspired is not an end.\\u201d You pointed out that it\\u2019s unclear what open problem our method is addressing, and we think this might be because we didn\\u2019t effectively communicate our problem statement. **In fact, we\\u2019ve proposed an open problem here: the \\u201cLarge Kernel Problem.\\u201d**\\n\\nLet us explain. We revisit the large kernel problem as introduced in line 58. Large kernels in CNNs often fail to show consistent performance improvements as kernel sizes increase, even when additional parameters are allocated [1]. This raises an important question in machine learning: **Can we expect performance gains by expanding kernel sizes horizontally, beyond the traditional vertical stacking of layers?**\\n\\nTo clarify this further, please take another look at Table 1. When comparing the (Base) condition with the (Large) condition\\u2014where kernel sizes are increased\\u2014we see that performance generally declines across models, except for ResNet. This underscores our key point: if we\\u2019ve successfully communicated what the large kernel problem is here, **the comparison that truly matters should be Large vs. Lp-Conv, not Base vs. Lp-Conv.** \\n\\nHere\\u2019s why our method stands out: it is explicitly designed to ensure that, regardless of the initial p-value, the Base model\\u2019s performance serves as a **lower bound** for the expected performance of a large kernel CNN with Lp-Masks. This makes our approach far more stable than simply training large kernels arbitrarily, while still achieving significant performance gains.\\n\\n### **How is this possible?** \\nThis stability arises because the enlarged kernels overlaid with Lp-Masks make the **central parameters prioritized for learning as if initialized like the Base model** (see Figures 2c, d, e or Figure 3). 
Over time, the Lp-Mask dynamically changes its conformation in a task-dependent manner by expanding, contracting, narrowing, elongating, or rotating as needed. To validate this mechanism, we performed the Sudoku task (please see trained individual Lp-Masks in Appendix A.7\\u2014it\\u2019s worth checking out how they adapted their shapes during training!). For example, in Sudoku\\u2014a combination of vertical, horizontal, or square constraints but not diagonal goals\\u2014we found that no Lp-Mask formed a diagonal shape, underscoring its adaptability. This dynamic behavior is key to the performance gains we observed.\\n\\n### **Did our method solve the Large Kernel Problem?** \\nIn a way, we believe it depends on how you interpret the results. Our approach is not just about achieving dramatic performance gains immediately but rather about providing a guaranteed lower bound on performance when enlarging kernels in CNNs.\\n\\nTo put this in perspective, **think about how vertically extending a model**\\u2014for instance, increasing ResNet-18 to ResNet-34 by doubling the number of layers\\u2014yields approximately a **1.25%** performance gain. Similarly, our method achieves a stable **2.64%** performance improvement on ResNet-18 by **horizontally extending the model size** through kernel enlargement. From our perspective, this represents a meaningful step toward addressing the Large Kernel Problem.\\n\\nBy stabilizing performance when increasing kernel sizes, our method offers a solid foundation for further exploration and refinement. We\\u2019re happy to discuss and answer more questions regarding this perspective\\u2014thank you for your thoughtful engagement!\\n\\n### **References**\\n[1] Peng, Chao, et al. \\\"Large kernel matters--improve semantic segmentation by global convolutional network.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 
2017.\"}", "{\"comment\": \"Thank you for your valuable feedback\\u2014it\\u2019s been incredibly helpful in improving our work. As today is the final day for reviews, please don\\u2019t hesitate to ask if you have any remaining questions. We\\u2019d be happy to clarify anything to ensure the work is as strong as possible.\"}", "{\"comment\": \"We sincerely thank the reviewers for their overwhelmingly positive and encouraging feedback. We **deeply appreciate the recognition of our efforts to craft a comprehensive narrative** bridging neuroscience and AI. The absence of identified weaknesses and the reviewers' enthusiasm for our approach reinforce our confidence in the significance and robustness of this work. This feedback inspires us to further explore and advance the potential of biologically inspired neural networks.\\n\\n\\n---\\n> **Question 1)** Connections with Anisotropic Diffusion \\n\\nThank you for your insightful and constructive comment. We greatly appreciate the suggestion to explore connections between Lp-convolution and anisotropic diffusion (Perona & Malik, 1990), as well as the broader scale-space theory outlined by Koenderink (1987) and Lindeberg (1994). While our current work emphasizes practical and empirical aspects of Lp-convolution, we agree that its theoretical mapping to diffusion processes could be a fascinating direction for future research.\\n\\nIn particular, the parameterized adaptability of p in Lp-convolution provides a natural mechanism to bridge isotropic and anisotropic processes. For example, **varying p could function similarly to the diffusion coefficient in anisotropic diffusion, dynamically controlling feature emphasis and smoothing based on local structures**. This adaptability could also extend scale-space representations by offering a flexible, multi-scale feature extraction framework. Thank you again for this stimulating idea, which offers valuable guidance for future exploration. 
We will carefully consider this perspective in our ongoing and future work. Again, we are truly delighted to receive such positive evaluations of our research. \\n\\n---\"}", "{\"title\": \"Thanks for the detailed response\", \"comment\": \"Thanks for the detailed response and additional experiments. I have no further questions.\"}", "{\"summary\": \"The paper proposes a brain-inspired approach of constraining the weights of convnets with a p-generalized Gaussian envelope. The authors demonstrate some minor improvements in performance on relatively small image datasets such as CIFAR-100 and TinyImageNet. They further claim that the learned representations of more \\\"brain-like\\\" convnets have higher representational similarity to the mouse visual system than their more classical counterparts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Novel inductive bias for convnets motivated by biology\", \"Overall fairly well-written paper\", \"Well-motivated and well-executed experiments\"], \"weaknesses\": \"- The effect sizes are really small, calling into question the practical impact\\n- Only \\\"toy\\\" datasets are explored\\n- Experiment on representational similarity not convincing\\n\\n\\n### Detailed explanation\\n\\nWhile I find the paper well motivated and the idea original, I see the paper mostly as a negative result given the small effect sizes observed across most of the tables and the \\\"toy\\\" nature of datasets such as CIFAR-100 and TinyImageNet. The paper now has undergone several revisions that only reinforce this conclusion.\\n\\nThere is a statistically significant improvement due to Lp-Conv for some classical architectures, but not all of them (e.g. ResNet). Generally, the improvements are small (a few percent). 
Given that architectural modifications alone can now push accuracy on CIFAR-100 >90% (https://arxiv.org/abs/2304.05350v2), the 1\\u20132% improvements in the 60\\u201370% range feel insignificant. Experiments on more modern architectures such as RepLKNet (Table 3) show the same pattern, if anything with decreasing effect size (<1% improvement). Similarly, the transfer learning experiment using ConvNeXt-V2 (Table 4) shows close to no effect. There are no experiments on closer-to-real-world datasets like ImageNet (although that's by now a fairly standard problem that can be done on a consumer GPU), although I should say that I do not expect major effects in that experiment, either. The data simply show that the inductive bias doesn't do much.\\n\\nThe experiment on representational similarity yields equally small effect sizes, again insignificant on many architectures. In addition, the comparison is done to several mouse visual areas, some of which aren't even part of the \\\"ventral\\\" stream for which a convnet trained on image classification would be a reasonable model.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the constructive critique from the reviewer expecting the high standards of ICLR. We acknowledge the reviewer\\u2019s concerns regarding the lack of the ImageNet-1k benchmark and the absence of SoTA results, especially in light of multiple revision opportunities. 
**While we share the ambition of presenting such results, at this early exploratory stage of our novel idea, we regret that we were not yet able to achieve those outcomes.** That said, we firmly believe that our research still offers meaningful contributions to the ML community, and we hope the reviewer will appreciate the broader value of our work in advancing novel directions within the field.\nBelow, we address the concerns raised and provide clarifications where needed.\n\n---\n> **Weakness 1)** Small Effect Sizes and Limited Practical Impact\n\nWe agree with the reviewer\u2019s observation that the effect sizes might appear small compared to the state-of-the-art (SoTA) models. Indeed, it is impressive that Astroformer achieves 93.4%, outperforming the second-best PyramidNet (89.9%) by more than 3% solely through architectural modifications [1]. However, we **kindly request the reviewer to consider the distinct values of our work beyond achieving SoTA performance**. Specifically, our contributions lie in:\n1. exploring the potential of novel biologically inspired inductive biases, and \n2. developing a new, easily pluggable module for CNNs.\n\n### **1. Novel inductive bias** \nWe acknowledge that prior works exploring biological ideas focus on providing novel insights and hence often fail to surpass original ML methods in raw performance [2-4]. Nonetheless, we have strived to provide not only novel insights but also practical improvements to the ML community. We have demonstrated consistent improvements in various architectures and tasks, underscoring the **robustness and versatility** of our approach. While our results are not SoTA, we believe our work still makes meaningful contributions to the field by introducing a novel inductive bias to the community.\n\n### **2. Easily pluggable module** \nHistorically, CNNs have evolved through the introduction of innovative modules. 
For example, a **depthwise convolution module** was originally proposed in MobileNet [5] to improve computational efficiency rather than to achieve SoTA, and now plays a pivotal role in SoTA architectures like ConvNeXt and RepLKNet [6, 7]. Similarly, CoAtNet, a precursor to Astroformer, leveraged a **hybrid module** combining depthwise convolution and self-attention, enabling Astroformer to achieve SoTA solely through architectural refinements [8]. In this context, our proposed **Lp-convolution module** offers practical value as a pluggable component that is easily integrated into existing architectures, facilitating **flexible and efficient deployment** (See Appendix A.21).\n\nWe believe that exploring novel bio-inspired algorithms and providing new convolutional modules enrich the machine learning community by offering both theoretical insights and practical tools. By contributing an additional design choice that enhances robustness and versatility across architectures, our work supports innovation in machine learning model design.\n\n### **References**\n[1] Dagli, Rishit. \\\"Astroformer: More data might not be all you need for classification.\\\" arXiv preprint arXiv:2304.05350 (2023) \n[2] Pogodin, Roman, et al. \\\"Towards biologically plausible convolutional networks.\\\" NeurIPS (2021) \n[3] Liu, Yuhan Helena, et al. \\\"Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators.\\\" NeurIPS (2022) \n[4] Kao, Chia Hsiang, and Bharath Hariharan. \\\"Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning.\\\" NeurIPS (2024) \n[5] Howard, Andrew G. \\\"Mobilenets: Efficient convolutional neural networks for mobile vision applications.\\\" arXiv preprint arXiv:1704.04861 (2017) \n[6] Liu, Zhuang, et al. \\\"A convnet for the 2020s.\\\" CVPR (2022) \n[7] Ding, Xiaohan, et al. \\\"Scaling up your kernels to 31x31: Revisiting large kernel design in cnns.\\\" CVPR (2022) \n[8] Dai, Zihang, et al. 
\\\"Coatnet: Marrying convolution and attention for all data sizes.\\\" NeurIPS (2021)\"}", "{\"metareview\": \"The paper introduces Lp-convolution, a novel approach inspired by the connectivity patterns observed in the brain\\u2019s visual cortex. By utilising the multivariate p-generalised normal distribution (MPND), the authors create Lp-masks that allow receptive fields (RFs) in CNNs to adapt in shape, scale, and orientation. This flexibility addresses a key challenge in CNN design known as the large kernel problem, where increasing kernel size often leads to diminished performance. The proposed Lp-convolution overcomes this by enabling large-kernel CNNs to achieve better performance, particularly when the Lp-mask configuration aligns with biologically inspired patterns (e.g. when p = 2, which reflects the Gaussian-like sparsity observed in biological connectivity). Furthermore, the study demonstrates that models employing Lp-convolution achieve stronger alignment with neural representations in the visual cortex, as evidenced by representational similarity analysis (RSA) with mouse visual cortex data.\\n\\nA key strength and contribution of Lp-convolution lies in its ability to adapt receptive fields in a task-specific manner, enabling CNNs to handle diverse input features more effectively. This adaptability contributes to significant improvements in model robustness, as shown in experiments with CIFAR-100-C, where models with Lp-convolution outperform traditional CNNs. The approach also allows for more efficient transfer learning, allowing existing pre-trained models to incorporate Lp-masks with minimal computational cost and performance drop. Importantly, the method is generalizable, being compatible with a wide range of architectures, from traditional models like AlexNet and ResNet to modern large-kernel architectures like RepLKNet and ConvNeXt.\\n\\nNevertheless Lp-convolution introduces certain complexities. 
The inclusion of trainable parameters (C and p) increases model complexity and requires careful hyperparameter tuning. Model performance is sensitive to the choice of the initial p-value, and selecting the right value for different architectures and tasks can be non-trivial. Moreover, while Lp-masks provide flexibility and task-specific adaptability, the underlying decision-making process within the model is less interpretable than simpler, more transparent CNN designs. The paper also lacks a comprehensive discussion of the computational overhead introduced by Lp-convolution, particularly in large-scale training scenarios.\\n\\nIn summary, this paper provides a neat bridging between biological and artificial intelligence by introducing Lp-convolution, a method that enhances CNN adaptability, robustness, and performance in large-kernel models. While it introduces new complexities, its biologically inspired design offers a fresh perspective on how insights from neuroscience can drive the development of more effective machine learning models, something that was recognised by the majority of the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers had mixed opinions. Reviewer ZVMo and Reviewer nN4T were strongly in favour of accepting the paper, praising the originality and practical impact of Lp-convolution. They highlighted its potential for bridging biological and artificial visual processing, its compatibility with existing CNN architectures, and its utility in large-kernel visual processing tasks. They appreciated the authors\\u2019 clear explanations, visualisations, and comprehensive experimental validation, noting the paper\\u2019s strong presentation. Reviewer nN4T also noted the method\\u2019s potential to inspire future \\u201cbrain-inspired\\u201d works due to its unique integration with existing models.\\n\\nOn the other hand, Reviewer 7Z3f raised some key concerns. 
While they acknowledged the novelty of the inductive bias and its biological inspiration, they questioned its practical utility, arguing that being \\u201cbiologically inspired\\u201d was not a sufficient justification on its own. The reviewer criticised the small effect sizes and the use of \\u201ctoy\\u201d datasets like CIFAR-100 and TinyImageNet, suggesting that the results failed to provide compelling evidence of significant performance improvements. They also raised concerns about the lack of experiments on larger, real-world datasets like ImageNet and claimed that the representational similarity analysis (RSA) was unconvincing, as the observed differences were modest and not statistically significant across several architectures.\\n\\nThe authors\\u2019 responses aimed to address these points. They defended their choice of datasets, arguing that smaller datasets were essential for studying novel inductive biases and that larger datasets might obscure these effects. To address the \\u201ctoy dataset\\u201d critique, they pointed to the transfer learning results and highlighted improvements in robustness, especially on CIFAR-100-C, where their method consistently outperformed the baseline across various types of corruptions. Regarding representational similarity, they argued that while effect sizes were small, they were still biologically meaningful and aligned with previous studies on neural alignment. 
They further clarified that their primary goal was not to achieve state-of-the-art performance but to introduce a biologically inspired, adaptable convolutional module that could be \u201ceasily pluggable\u201d into existing architectures like AlexNet, ResNet, and ConvNeXt.\n\nThe final reviewer scores were 8, 8, 6 (bumped up from 5) and 3, which in balance provide good evidence that the paper has merits and has passed the threshold for acceptance, while recognising that it does not solve everything.\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful and encouraging feedback, which deeply motivates us. Below, we address the concerns raised and provide further clarifications.\n\n---\n> **Weakness 1) Limited Utility Demonstrated for Large Kernel CNNs**\n\nThank you for your valuable feedback. We would like to address your concerns by highlighting the strengths of our research and providing our perspective on the issues raised.\n\n### **1. Lp-Convolution for Large Kernel Problem** \nFirstly, we reiterate our discussion of the large kernel problem mentioned on line 58. Recent large kernel CNNs do not guarantee performance improvements when increasing the kernel size, even though additional parameters are secured (see Tables 1 and 4). This raises a critical machine learning question: **Can we expect additional performance gains by moving beyond traditional vertical layer stacking and expanding the kernel size horizontally**?\n\nIn Tables 1 and 4, the performance gains when comparing our method with legacy models (Base vs. Lp-Conv) may appear modest. However, **when applied to large kernel models (Large vs. Lp-Conv), we observe a significant increase** regardless of the initial value of p. Specifically, in Table 3, applying Lp-Convolution to RepLKNet\u2014a model that already utilizes large kernel sizes\u2014resulted in a 1% performance improvement with almost no additional computational cost. 
We believe this constitutes a meaningful enhancement, demonstrating the practical benefits and reinforcing the potential of large kernel CNNs to improve both efficiency and performance.\n\n### **2. Strong Robustness with Lp-Convolution**\nIn Table 4, we demonstrate that the Lp-Convolution model with p=2 outperforms the Base model in 57 corruption scenarios, whereas the Base model does not achieve a single win. This consistent superiority is not only reflected in the win counts but also in the absolute performance values, which we include in the raw performance table of ConvNeXt as follows.\n|Corruption|Base|Large|Lp2|Lp4|Lp8|Lp16|\n|-|-|-|-|-|-|--|\n|brightness|**23.28\u00b17.14**|31.23\u00b11.61|**39.26\u00b14.76**|31.68\u00b19.26|33.93\u00b16.33|34.27\u00b13.94|\n|contrast|**17.08\u00b16.12**|26.26\u00b12.06|**34.91\u00b15.46**|27.25\u00b19.38|29.29\u00b16.62|29.17\u00b14.08|\n|defocus_blur|**24.26\u00b17.11**|31.35\u00b11.42|**39.35\u00b14.50**|32.45\u00b19.11|34.46\u00b16.20|34.88\u00b13.45|\n|elastic_transform|**20.40\u00b15.67**|27.67\u00b11.11|**33.43\u00b14.08**|27.24\u00b17.44|29.46\u00b14.88|29.83\u00b13.26|\n|fog|**20.08\u00b16.60**|28.48\u00b11.95|**36.37\u00b14.97**|29.17\u00b19.13|31.15\u00b16.45|31.41\u00b14.28|\n|frost|**17.36\u00b15.79**|27.44\u00b11.84|**33.68\u00b15.21**|26.46\u00b18.32|28.28\u00b15.77|28.29\u00b14.19|\n|gaussian_blur|**24.34\u00b17.07**|31.53\u00b11.50|**39.37\u00b14.46**|32.51\u00b19.21|34.65\u00b16.16|35.11\u00b13.28|\n|gaussian_noise|**20.64\u00b15.80**|30.74\u00b11.30|**34.57\u00b14.24**|27.69\u00b17.07|29.75\u00b14.12|29.86\u00b13.59|\n|glass_blur|**13.96\u00b12.99**|27.66\u00b10.96|**24.52\u00b14.78**|19.43\u00b13.19|21.19\u00b11.98|19.99\u00b14.36|\n|impulse_noise|**20.81\u00b16.02**|29.87\u00b11.49|**34.47\u00b14.12**|27.84\u00b17.46|29.74\u00b14.56|29.84\u00b13.46|\n|jpeg_compression|**21.98\u00b16.34**|31.09\u00b11.42|**35.81\u00b13.94**|29.37\u00b17.65|31.58\u00b14.99|31.79\u00b13.46|\n|motion_blur|**22.40\u00b15.93**|29.13\u00b11.16|**35.79\u00b14.24**|29.44\u00b17.69|31.49\u00b15.44|31.93\u00b12.75|\n|pixelate|**24.24\u00b16.78**|31.32\u00b11.69|**38.56\u00b14.42**|31.62\u00b18.63|33.87\u00b15.64|33.93\u00b13.64|\n|saturate|**11.64\u00b14.90**|19.34\u00b12.39|**26.10\u00b14.28**|20.33\u00b17.28|22.25\u00b15.10|22.14\u00b13.41|\n|shot_noise|**22.07\u00b16.28**|31.24\u00b11.49|**36.61\u00b14.64**|29.46\u00b17.78|31.56\u00b14.84|31.80\u00b13.53|\n|snow|**18.86\u00b15.75**|28.61\u00b11.55|**33.54\u00b14.17**|27.04\u00b17.64|29.07\u00b14.92|28.58\u00b13.76|\n|spatter|**21.98\u00b16.62**|30.63\u00b11.62|**36.47\u00b14.63**|29.54\u00b18.20|31.66\u00b15.31|31.82\u00b13.57|\n|speckle_noise|**21.60\u00b16.00**|31.15\u00b11.44|**36.48\u00b14.50**|29.47\u00b17.79|31.56\u00b14.92|31.99\u00b13.56|\n|zoom_blur|**22.07\u00b16.07**|28.46\u00b10.98|**35.63\u00b14.55**|29.33\u00b17.30|30.97\u00b15.18|31.73\u00b12.70|\n\nWhen comparing these values, it becomes evident that our method exhibits exceptional robustness across various types of corruptions. \n\n### **3. Comparison with ViT** \nThank you for your valuable feedback. While our primary focus is on improving CNN architectures, we agree with the reviewer that including a comparison to ViTs can better highlight the utility and relevance of our proposed method. 
To address this, we conducted experiments with ViT base models on TinyImageNet, and the results are summarized below:\n|TinyImageNet|Top-1(%)|FLOPs(G)|Params(M)|\n|-|-:|-:|-:|\n|ViT-32x32|49.88| 4.37|87.6|\n|**ViT-16x16**|54.20|16.87|86.0|\n|AlexNet|52.25|0.71|57.82|\n|**Lp2-AlexNet**|54.13|3.41|68.6|\n||||\n|Lp2-VGG-16|69.96|83.74|200.5|\n|Lp2-ResNet-18|68.45|9.86|61.5|\n|Lp2-ResNet-34|70.43|19.93|116.6|\n|Lp2-ConvNeXt-T|70.72|5.42|33.8|\n\nAs shown in the table, Lp2-AlexNet achieves comparable performance to ViT-16x16 on TinyImageNet with significantly lower parameter counts and computational cost, demonstrating its efficiency. Thank you for raising this important point. **We will include this comparison in our revised manuscript** to provide a clearer perspective on the relationship between ViTs and CNNs.\"}", "{\"summary\": \"This paper introduces $L_p$-Convolution by integrating the multivariate p-generalized normal distribution into $L_p$ masks for convolution filters. 
It allows the network to adapt to different receptive field shapes and train efficiently with large kernels. The paper shows $L_p$-Convolution has an advantage in tasks such as the Sudoku challenge.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The idea of a $L_p$ convolution is original and could have wide application in visual tasks that require flexible receptive field size or in general tasks that require both local and global information from visual input. Showing its ability to transfer to any pretrained network greatly lowers the threshold to apply this for a wide range of tasks. The choices of the Sudoku task and the follow-up ablation analysis are solid and demonstrate well the strength of this method. Out of all papers that take inspirations from neuroscience and try to utilize it to improve neural nets, this paper stands out in actually providing a fundamentally different implementation of CNNs.\n\nOther than the points mentioned above, the benchmark testing was thorough and presented clearly. The visualization of both the $L_p$ mask and convolution is very helpful for understanding the concepts. The writing is very clear.\", \"weaknesses\": \"The paper does show through Tables 1-4 that $L_p$-CNNs can train with large kernels and have some advantage in robustness as well as accuracy in benchmark tests. However, the improvements over baseline models are small. I don't think these numbers convince me of how useful the $L_p$-CNNs could be. Aside from the Sudoku task, the paper didn't really show the advantage of efficiently trained large kernel $L_p$-CNNs through a task that actually could really benefit from large kernels. I would suggest including some more tasks that require processing of context, or even tasks ViT excels at for comparison.\n\nFor the robustness benchmark as well as the Sudoku task, it could be informative to include performance of ViTs as well. 
\n\nLastly, it is pretty well established throughout the paper that $p_{init}=2$ is the most useful for most tasks, and most closely resembles the biological system. I am not sure it is worth having another section (sec. 6) dedicated to comparing similarity of RSM across different $p$s. If the author were to demonstrate it is also a better model for brain representational alignment, then I would recommend doing a more thorough study including more datasets, brain regions and variations of CNNs.\", \"questions\": \"See weakness for suggestions.\", \"potential_typo\": \"\", \"line_159\": \"a solution to the of large kernel problem in CNN -> a solution of the large ...\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper primarily investigates variations in local connectivity patterns within CNNs, examining whether incorporating biologically inspired connectivity structures can improve model performance and increase alignment with brain representations. Specifically, the authors introduce Lp-convolution, which utilizes the multivariate p-generalized normal distribution (MPND). 
The proposed adaptable Lp-masks aim to bridge the gap between artificial and biological connectivity patterns, finding optimal configurations through task-based adaptation to enable strong performance in tasks requiring flexible receptive field shapes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Experimental results indicate that CNN neural representations exhibit a stronger alignment with the visual cortex when the Lp-mask shape approximates a Gaussian distribution.\", \"Testing the conformational adaptability of Lp-masks in the Sudoku challenge yielded interesting results, highlighting the flexibility of this approach.\"], \"weaknesses\": [\"Consistency in terminology would improve clarity; alternating between \\u201cLp-convolution\\u201d and \\u201cLp-mask\\u201d can be confusing. Using a single term throughout would make the concepts easier to follow.\", \"The mention of Vision Transformers (ViTs) in the introduction feels tenuous, as they are not included in subsequent experiments, nor are they closely related to the main theme of the paper.\", \"In lines 108-110, where it is stated that \\u201cCNNs have rectangular, dense, and uniformly distributed connections, as opposed to the circular, sparse, and normally distributed connections in biological neurons,\\u201d this description would benefit from supporting references regarding the shapes of receptive fields in biological neurons. It\\u2019s also worth questioning whether this statement accurately characterizes CNN weights, as CNNs trained to model retinal ganglion cells, for instance, have demonstrated sparse weight patterns ([1]-[5]).\", \"Lines 137-138 mention that \\u201cwe optimized parameters of p and \\u03c3 in MPND (Fig.1e, Eq.1)...\\u201d However, Eq.1 and the text do not define \\u03c3. 
It\\u2019s also recommended that the authors confirm Eq.1\\u2019s form by referencing the standard expression of a multivariate Gaussian function.\", \"Integrating Lp-masks in CNNs does not appear to significantly improve recognition accuracy across datasets. Comparing this approach to ViTs, it\\u2019s unclear if it achieves current state-of-the-art performance.\", \"The justification for using large, sparse kernels feels somewhat weak. Aside from achieving marginal improvements in RSM alignment with the visual cortex, it\\u2019s unclear how this approach benefits contemporary computer vision tasks.\"], \"references\": \"[1] Maheswaranathan, Niru, et al. \\\"Deep learning models reveal internal structure and diverse computations in the retina under natural scenes.\\\" BioRxiv\\u00a0(2018): 340943. \\n\\n[2] Tanaka, Hidenori, et al. \\\"From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction.\\\"\\u00a0 Advances in neural information processing systems\\u00a032 (2019).\\n\\n[3] Lindsey, Jack, et al. \\\"A unified theory of early visual representations from retina to cortex through anatomically constrained deep CNNs.\\\"\\u00a0 arXiv preprint arXiv:1901.00945\\u00a0(2019).\\n\\n[4] Yan, Qi, et al. \\\"Revealing fine structures of the retinal receptive field by deep-learning networks.\\\"\\u00a0 IEEE transactions on cybernetics\\u00a052.1 (2020): 39-50.\\n\\n[5] Zheng, Yajing, et al. 
\\\"Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks.\\\" Patterns\u00a02.10 (2021).\", \"questions\": [\"In the authors' claim regarding \\\"Lp-convolution with biological constraint,\\\" specifically the \\\"Gaussian structured sparsity,\\\" what theoretical and empirical evidence supports this biological constraint?\", \"Across various experiments, since p is a learnable parameter, what typical values does it converge to, and are there any observable trends or variations across different datasets? Could the authors interpret these findings in relation to biological insights?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Lp-convolution, a novel approach to convolutional neural networks (CNNs) inspired by biological visual processing. The work addresses fundamental differences between artificial and biological visual systems: while traditional CNNs employ rectangular, dense, and uniform connectivity patterns, biological visual systems feature circular, sparse, and normally distributed connections. Additionally, the paper tackles the longstanding challenge that large kernel sizes in CNNs typically don't improve performance despite increased parameters.\n\nThe key innovation is the introduction of Lp-convolution, which uses multivariate p-generalized normal distribution (MPND) to bridge these biological-artificial differences. The method implements trainable \\\"Lp-masks\\\" that can adapt their shape through parameters, enabling flexible receptive field shapes that better match biological patterns. 
Technically, this is achieved by applying channel-wise Lp-masks that overlay onto convolutional kernels, with shape parameters that can be trained for task-dependent adaptation.\", \"the_authors_demonstrate_several_significant_findings\": \"Lp-convolution improves the performance of CNNs with large kernels, with optimal results achieved when the initial p parameter approaches 2 (matching biological Gaussian distribution). Moreover, neural representations show better alignment with the visual cortex when connectivity patterns are more biologically plausible.\", \"the_practical_impact_of_this_work_is_threefold\": \"it enables effective utilization of larger kernels in CNNs, achieves more biologically plausible artificial neural networks, and maintains compatibility with existing CNN architectures.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is overall well-written, with sections that tell a clear, sequential story. The usage of bold characters to highlight important parts is particularly appreciated. This is a very strong contribution across multiple aspects:\", \"Connectivity patterns as inductive biases are largely unexplored within this community. The biological inspiration effectively guides the search for plausible connectivity patterns, and the approach proposed in this submission is particularly apt. 
The work presents a complete narrative, from biological mechanism inspiration to the implementation of Lp-convolution for neural activity prediction in V1 and representational similarity analysis.\", \"The paper's approach to addressing large kernel network training challenges could potentially bridge the performance gap between transformers and CNNs in image classification tasks.\", \"The mathematical formulation is sound and accessible, with figures (e.g., Figure 1 and 2) that effectively illustrate concepts and build intuition about parameter effects.\", \"The choice of the Sudoku challenge adds significant value, serving as an excellent demonstration of the model's capabilities in an easily understandable context (especially for $L_p$ mask shapes)\", \"The Appendix comprehensively addresses potential questions, demonstrating thorough consideration of the work's implications and limitations.\"], \"weaknesses\": \"I don't think significant weaknesses are present in this work. The paper should be accepted as it is.\", \"questions\": \"**(More of a curiosity)** For future developments of this work, it would be interesting to explore connections with anisotropic diffusion (Perona & Malik, *Scale-space and edge detection using anisotropic diffusion*, 1990). In standard convolution, there exists a well-established mapping between convolution operators and isotropic diffusion processes (as explored in Scale-Space theory, particularly in Koenderink, *The structure of images*, 1987; and Lindeberg, *Scale Space theory in computer vision*, 1994). 
How might Lp-convolution relate to or extend these theoretical frameworks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"---\\n> **Weakness 3)** Experiment on Representational Similarity Not Convincing\\n\\nWe appreciate the reviewer\\u2019s feedback and understand the concerns regarding the representational similarity (RSM) results, specifically: \\n1) small effect sizes and modest significance \\n2) inclusion of non-ventral stream regions in the analysis.\\n\\nHowever, we respectfully argue that our RSM results are compelling and well-supported in addressing the primary objective of our study: **to explore whether introducing biologically inspired constraints into CNNs enhances their alignment with the brain.** Below, we provide detailed responses to each concern.\\n\\n**1. Small Effect Size and Modest Significance** \\nWhile the observed effect sizes in RSM analysis may appear modest, we believe our results are robust, meaningful, and well-aligned with prior research. Key points supporting this include:\\n\\n- **Robustness Across Architectures**: \\nThe observed trends consistently demonstrate that CNNs incorporating Gaussian sparsity (a biological constraint) achieve better alignment with brain representations compared to others. These trends are consistent across various CNN architectures, despite the inherent variability and complexity of biological data.\\n\\n- **Meaningful Effect Sizes Relative to Accuracy Gains**: \\nThe SSM improvements for Lp models (e.g., AlexNet) reach approximately 3%, significantly larger than the corresponding Top-1 accuracy gains of around 1%. 
This disparity underscores that the observed RSM differences reflect meaningful biological alignment rather than statistical noise.\\n\\n- **Reproducibility with Prior Work**: \\nWhile the absolute SSM values may seem modest, their range (0.2\\u20130.4) aligns closely with prior findings, such as those reported by Shi et al. (2019) [1]. Moreover, we executed the same codebase as previous work [2], ensuring methodological consistency and the validity of our comparisons.\\n\\nThese points collectively demonstrate that our RSM results, while modest in absolute terms, are biologically relevant and sufficiently robust to support one of our main objectives.\\n\\n**2. Inclusion of Non-Ventral Stream Regions**\\nIn response to the reviewer\\u2019s concern about including non-ventral stream regions, we note that our methodology follows established practices from prior studies [1\\u20133], which analyze a broad range of visual areas in the mouse cortex. Our rationale for including both ventral and dorsal regions is as follows:\\n\\n- **Holistic Evaluation of Representational Capacity**: \\nWhile ventral stream regions like VISp and VISl are directly relevant to \\\"what\\\" tasks such as image classification, dorsal regions (e.g., VISam, VISpm, VISal) offer insights into the broader representational capacity of CNNs. By analyzing both streams, we aimed to provide a more comprehensive evaluation of CNNs in relation to biological systems.\\n\\n- **Focused Analysis on Ventral Stream Regions**: \\nTo ensure clarity, region-specific SSM analyses are provided in **Appendix A.12**. These results clearly show that the ventral stream region VISl consistently achieves the highest SSM values across all CNN architectures. 
Consequently, **the maximum SSM values reported in Figure 6 are derived from VISl**, confirming that our primary analysis is appropriately focused on ventral stream processing and directly relevant to tasks like image classification.\\n\\nBy incorporating both ventral and dorsal regions, we situate our findings within a comprehensive framework for evaluating CNNs in the context of biological visual systems. The dominance of VISl in our results further substantiates the alignment of CNNs with ventral stream processing, reinforcing the validity and significance of our conclusions.\\n\\n### **References**\\n[1] Shi et al. \\\"Comparison against task driven artificial neural networks reveals functional properties in mouse visual cortex.\\\" NeurIPS (2019) \\n[2] Bakhtiari et al. \\\"The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.\\\" NeurIPS (2021) \\n[3] Shi et al. \\\"MouseNet: A biologically constrained convolutional neural network model for the mouse visual cortex.\\\" PLOS Computational Biology (2022)\"}", "{\"comment\": \"We sincerely thank the reviewer for recognizing our contributions, particularly acknowledging our effort to demonstrate the effectiveness of Lp-Convolution through the Sudoku Challenge. Additionally, we deeply value the constructive feedback provided, which offers significant opportunities to enhance the quality of our paper. Below, we address the reviewer's comments and provide detailed clarifications.\\n\\n---\\n> **Weakness 1)** Inconsistent and Confusing Terminology for Key Components\\n\\nFirst, we would like to clarify the terminology: \\n- **Lp-Mask**: trainable mask $\\\\mathcal{M}$, applied to convolutional weights $\\\\mathcal{W_i}$ **(Eqn. 2)**.\\n- **Lp-convolution**: the overall convolution process incorporating the Lp-mask **(Eqn. 
3)**.\\n\\nTo address this, we will revise the manuscript as follows:\\n- Clearly highlight the distinction between the two terms in **Section 3**.\\n- Use \\u201cLp-convolution\\u201d consistently throughout the text, reserving \\u201cLp-mask\\u201d for contexts where it is explicitly relevant.\\n\\nWe believe these changes will significantly enhance the clarity of the manuscript and minimize any potential confusion. Thank you for highlighting this important point.\\n\\n\\n---\\n> **Weakness 2)** Irrelevant Mention of Vision Transformers (ViTs)\\n\\nWhile ViTs are not the primary focus of our study, we included them in the introduction to provide broader insight\\u2014that CNNs can be advantageous over ViTs in data-hungry regimes due to their inductive bias. However, as the reviewer pointed out, without subsequent experimental results to support this point, we agree that our intention may not be effectively conveyed.\\n\\nTo resolve this, we compared ViTs with Lp models in the following table:\\n\\n|TinyImageNet|Top-1(%)|FLOPs(G)|Params(M)|\\n|-|-:|-:|-:|\\n|ViT-32x32|49.88| 4.37|87.6|\\n|**ViT-16x16**|54.20|16.87|86.0|\\n|AlexNet|52.25|0.71|57.82|\\n|**Lp2-AlexNet**|54.13|3.41|68.6|\\n||||\\n|Lp2-VGG-16|69.96|83.74|200.5|\\n|Lp2-ResNet-18|68.45|9.86|61.5|\\n|Lp2-ResNet-34|70.43|19.93|116.6|\\n|Lp2-ConvNeXt-T|70.72|5.42|33.8|\\n\\nAs shown in the table, Lp2-AlexNet achieves comparable performance to ViT-16x16 on TinyImageNet with significantly lower parameter counts and computational cost, demonstrating its efficiency. **We will include this result to clearly relate ViTs and CNNs**. Thank you for raising this important point.\\n\\n---\\n> **Weakness 3)** Lack of Supporting References for Biological Comparisons\\n\\nWe agree that lines 108\\u2013110 should be carefully addressed with supporting references, as they represent a key premise of our work. 
Additionally, the observation that CNNs can exhibit sparse weight patterns during biological modeling is an important point that warrants further discussion.\\n\\nTo address these points, **we will revise the manuscript lines 108-110 as follows**: \\n- \\\"Standard CNN architectures are typically designed with rectangular, dense, and uniformly distributed connections [6-9], in contrast to the circular, sparse, and normally distributed connections commonly observed in biological neurons [10-12]. Early studies in biological modeling using CNNs have shown that task-specific adaptations can lead to sparse weight patterns [1-5]. These insights demonstrate the adaptability of CNNs and highlight the potential for bridging artificial and biological connectivity patterns.\\\"\\n\\nThese revisions will provide stronger support for our claims and improve the manuscript\\u2019s precision.\\n\\n### **References**\\n[1-5] See reviewer's references. \\n[6] LeCun et al. \\\"Gradient-based learning applied to document recognition.\\\" IEEE (1998) \\n[7] Krizhevsky et al. \\\"ImageNet classification with deep convolutional neural networks.\\\" NeurIPS (2012) \\n[8] Simonyan and Zisserman. \\\"Very deep convolutional networks for large-scale image recognition.\\\" ICLR (2015) \\n[9] He et al. \\\"Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification.\\\" ICCV (2015) \\n[10] Lerma-Usabiaga et al. \\\"Population receptive field shapes in early visual cortex are nearly circular.\\\" J Neurosci (2021) \\n[11] Seeman et al. \\\"Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex.\\\" eLife (2018) \\n[12] Hage et al. \\\"Synaptic connectivity to L2/3 of primary visual cortex measured by two-photon optogenetic stimulation.\\\" eLife (2022) \\n\\n---\\n> **Weakness 4)** Undefined Variables and Ambiguity in Key Equations\\n\\nWe are grateful to the reviewer for bringing this to our attention. 
We will ensure that the variable $\\\\sigma$ (defined in the **legend of Fig. 1e**) is **clearly stated in the main text** to enhance clarity and precision in our presentation. Moreover, we agree that it needs to be explicitly defined in the main text as well. We will also **properly cite the following reference** to **Eqn 1**.\\n\\n- Goodman, Irwin R., and Samuel Kotz. \\\"Multivariate \\u03b8-generalized normal distributions.\\\" Journal of Multivariate Analysis (1973)\\n\\nWe appreciate the reviewer\\u2019s sharp observation regarding the undefined variables and ambiguity in key equations.\"}", "{\"comment\": \"---\\n> **Re 2)** Again, what problem does this inductive bias solve? [...] I suggest you present the evidence. [...]\\n\\nIn addition to the Large Kernel Problem we\\u2019ve discussed in **Re 1)**, we also tried to showcase the strengths of our method through its application to transfer learning. However, the effect size from the transfer learning results might feel underwhelming to you. Reviewer thinks if we are to claim our method is effective, we need to back it up with stronger evidence.\\n\\nSo, how about we take a look at our **robustness experiments** (Table 2)? The effect size there is much more significant. 
For example, in the case of the ConvNeXt network, the raw values show that **our method almost doubles the robustness performance compared to the baseline** across various types of corruption.\\n\\n|Corruption|Base|Large|Lp2|Lp4|Lp8|Lp16|\\n|-|-|-|-|-|-|--|\\n|brightness|**23.28\\u00b17.14**|31.23\\u00b11.61|**39.26\\u00b14.76**|31.68\\u00b19.26|33.93\\u00b16.33|34.27\\u00b13.94|\\n|contrast|**17.08\\u00b16.12**|26.26\\u00b12.06|**34.91\\u00b15.46**|27.25\\u00b19.38|29.29\\u00b16.62|29.17\\u00b14.08|\\n|defocus_blur|**24.26\\u00b17.11**|31.35\\u00b11.42|**39.35\\u00b14.50**|32.45\\u00b19.11|34.46\\u00b16.20|34.88\\u00b13.45|\\n|elastic_transform|**20.40\\u00b15.67**|27.67\\u00b11.11|**33.43\\u00b14.08**|27.24\\u00b17.44|29.46\\u00b14.88|29.83\\u00b13.26|\\n|fog|**20.08\\u00b16.60**|28.48\\u00b11.95|**36.37\\u00b14.97**|29.17\\u00b19.13|31.15\\u00b16.45|31.41\\u00b14.28|\\n|frost|**17.36\\u00b15.79**|27.44\\u00b11.84|**33.68\\u00b15.21**|26.46\\u00b18.32|28.28\\u00b15.77|28.29\\u00b14.19|\\n|gaussian_blur|**24.34\\u00b17.07**|31.53\\u00b11.50|**39.37\\u00b14.46**|32.51\\u00b19.21|34.65\\u00b16.16|35.11\\u00b13.28|\\n|gaussian_noise|**20.64\\u00b15.80**|30.74\\u00b11.30|**34.57\\u00b14.24**|27.69\\u00b17.07|29.75\\u00b14.12|29.86\\u00b13.59|\\n|glass_blur|**13.96\\u00b12.99**|27.66\\u00b10.96|**24.52\\u00b14.78**|19.43\\u00b13.19|21.19\\u00b11.98|19.99\\u00b14.36|\\n|impulse_noise|**20.81\\u00b16.02**|29.87\\u00b11.49|**34.47\\u00b14.12**|27.84\\u00b17.46|29.74\\u00b14.56|29.84\\u00b13.46|\\n|jpeg_compression|**21.98\\u00b16.34**|31.09\\u00b11.42|**35.81\\u00b13.94**|29.37\\u00b17.65|31.58\\u00b14.99|31.79\\u00b13.46|\\n|motion_blur|**22.40\\u00b15.93**|29.13\\u00b11.16|**35.79\\u00b14.24**|29.44\\u00b17.69|31.49\\u00b15.44|31.93\\u00b12.75|\\n|pixelate|**24.24\\u00b16.78**|31.32\\u00b11.69|**38.56\\u00b14.42**|31.62\\u00b18.63|33.87\\u00b15.64|33.93\\u00b13.64|\\n|saturate|**11.64\\u00b14.90**|19.34\\u00b12.39|**26.10\\u00b14.28**|20.33\\u00b17.28|22.25\\u00b15.10|22.14\\u00b13.41|\\n|shot_noise|**22.07\\u00b16.28**|31.24\\u00b11.49|**36.61\\u00b14.64**|29.46\\u00b17.78|31.56\\u00b14.84|31.80\\u00b13.53|\\n|snow|**18.86\\u00b15.75**|28.61\\u00b11.55|**33.54\\u00b14.17**|27.04\\u00b17.64|29.07\\u00b14.92|28.58\\u00b13.76|\\n|spatter|**21.98\\u00b16.62**|30.63\\u00b11.62|**36.47\\u00b14.63**|29.54\\u00b18.20|31.66\\u00b15.31|31.82\\u00b13.57|\\n|speckle_noise|**21.60\\u00b16.00**|31.15\\u00b11.44|**36.48\\u00b14.50**|29.47\\u00b17.79|31.56\\u00b14.92|31.99\\u00b13.56|\\n|zoom_blur|**22.07\\u00b16.07**|28.46\\u00b10.98|**35.63\\u00b14.55**|29.33\\u00b17.30|30.97\\u00b15.18|31.73\\u00b12.70|\\n\\nWe haven\\u2019t fully investigated why Gaussian sparsity has such a dramatic impact on robustness yet. However, we believe this result lets us claim that our method is beneficial for **model robustness** along with the Large Kernel Problem.\\n\\n---\\n>**Re 3)** I do not agree that they are robust across architectures. The differences are not statistically significant in 3 of 5 architectures. Percent improvements between RSA and accuracy are not comparable, as they are not on the same scale. I did not criticize the absolute magnitude of your RSA values; I am aware of them often being small, especially in the mouse when compared to task-trained CNNs.\\n\\nFirst off, thank you for clarifying that your critique isn\\u2019t about whether our experiments were conducted properly, but rather about whether the statistical significance level of our results is strong enough to support our claims. That\\u2019s a really helpful distinction, and we appreciate you pointing it out.\\n\\nWe observed a mostly decreasing trend as p_init increased, and when we said \\u201crobust,\\u201d we meant that the trends were generally consistent across different conditions. However, as you rightly pointed out, the significance levels stand out primarily in AlexNet and ConvNeXt. 
\\n\\nThroughout previous revisions, we have also **included the neural activity prediction of V1 (Table 5)** using Lp-CNNs to support our claim. When we replaced the CNN in a CNN-GRU network (same model from previous studies) with our Lp-CNN, we found that Gaussian sparsity achieved the best performance. We think this result, combined with our RSA results, could support the idea that Gaussian sparsity allows CNNs to better align with the brain. We\\u2019d love to hear your thoughts on this\\u2014do you think it strengthens our argument, or is there something else we should consider?\\n\\n---\\nThank you again for taking the time to provide feedback. It means a lot to us. Again, it\\u2019s not just about getting higher scores\\u2014your constructive critiques are what truly help us improve. Please don\\u2019t hesitate to share more of your thoughts anytime. We deeply value them!\"}", "{\"comment\": \"Thank you for the detailed response. I remain unconvinced and will maintain my rating.\\n\\n1) I do see that it's a novel inductive bias and easily pluggable \\u2013 but to what end? What problem does it solve? Being biologically inspired is not an end in itself if it does not address an open problem.\\n\\n2) Again, what problem does this inductive bias solve? We have strong backbones that can be used for transfer learning and we have better methods for these toy tasks. If your inductive bias solves some data-scarce problem (better than previous methods, including transfer learning from backbones that exist and can be downloaded), I suggest you present the evidence. Table 4 is one example for super tiny effects, so I don't consider it support for your claim that your method is useful.\\n\\n3) I do not agree that they are robust across architectures. The differences are not statistically significant in 3 of 5 architectures. Percent improvements between RSA and accuracy are not comparable, as they are not on the same scale. 
I did not criticize the absolute magnitude of your RSA values; I am aware of them often being small, especially in the mouse when compared to task-trained CNNs.\"}", "{\"comment\": \"---\\n> **Question 1)** Evidence for Gaussian Sparsity as Biological Constraints\\n\\nThank you for raising this important question. Since Gaussian sparsity is a core assumption in our model as a biological constraint, it is essential to address this thoroughly. Below, we provide both theoretical and empirical evidence supporting Gaussian structured sparsity.\\n\\n### **1.Theoretical Evidence**\\n- ***Sparse Coding Theory***: Sparse Coding Theory posits that neural systems optimize sensory representations by minimizing redundancy. **Learning a sparse code** for natural images leads to the emergence of simple-cell receptive field properties [1]. This process can be linked with **Gaussian priors**, where synaptic weights follow a Gaussian distribution with most connections being weak and a few strong, promoting efficient information encoding [2].\\n\\n- ***Effective Receptive Field (ERF) Theory***: In convolutional neural networks, the actual influence of input pixels on an output neuron decreases in a **Gaussian manner from the center of the theoretical receptive field** [3]. This means that while the theoretical receptive field defines the maximum possible area of influence, the ERF is effectively smaller and Gaussian-shaped, with central pixels contributing most significantly to the neuron's output.\\n\\n### **2. Empirical Evidence**\\n- ***Supporting References***: These two references demonstrate both **anatomical and functional distribution of synapses** predominantly following a Gaussian-like distribution in the visual cortex [4-5]. 
\\n- ***Analysis Result***: We have demonstrated the Gaussian distribution of in vivo functional synapse data [5] in live mouse V1 in **Appendix A.3**.\\n\\nThese foundations affirm Gaussian sparsity as a biologically plausible constraint within our Lp-convolution framework. **We will include this additional section in the manuscript** to reflect these points comprehensively. We thank the reviewer for their valuable feedback, which has enhanced the clarity and robustness of our manuscript.\\n\\n### **References**\\n[1] Olshausen and Field. \\\"Emergence of simple-cell receptive field properties by learning a sparse code for natural images.\\\" Nature (1996) \\n[2] Olshausen and Millman. \\\"Learning sparse codes with a mixture-of-Gaussians prior.\\\" NeurIPS (1999) \\n[3] Luo et al. \\\"Understanding the effective receptive field in deep convolutional neural networks.\\\" NeurIPS (2016) \\n[4] Hellwig et al. \\\"A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex.\\\" Biological cybernetics (2000) \\n[5] Rossi et al. \\\"Spatial connectivity matches direction selectivity in visual cortex.\\\" Nature (2020)\\n\\n---\\n> **Question 2)** What are the trends in p and their biological meaning?\\n\\nWe refer the reviewer to **Appendix A.16** for detailed analyses and provide the following summary. \\n\\n### **1. Distribution and Convergence of p**\\nThe learned p values, as shown in Appendix A.16.1 and A.16.2, are dispersed around distinct values such as 2, 4, 8, and 16 without overlapping. While these distributions vary slightly across datasets and model architectures, a general trend is observed: **p consistently converges towards smaller values** compared to its initialization, suggesting a decreasing tendency throughout training.\\n\\n### **2. Layer-wise Trend**\\nAppendix A.16.3 and A.16.4 further reveal layer-specific trends. 
In early layers, p predominantly decreases, likely reflecting the refinement of basic feature representations. Conversely, in late layers, p tends to increase, indicating enhanced integration of higher-level features and abstraction. These **contrasting trends align with the hierarchical processing patterns observed in neural networks and biological systems.**\\n\\n### **3. Biological Meaning** \\nThe convergence of p towards smaller values supports our hypothesis that **p learns to reinforce biological constraints.** This is consistent with effective receptive field theory and our findings in Figure 1, where sensory input receptive fields in trained models resemble Gaussian distributions, a natural phenomenon in biological sensory systems. However, as p remains sensitive to its initial values, ensuring convergence to globally optimal values remains a challenge, which we aim to address in future research.\"}", "{\"comment\": \"# Review Summary\\nWe sincerely appreciate the reviewers' time and effort in carefully evaluating our paper. Some comments were incredibly encouraging, acknowledging the value of our efforts so far, while others provided constructive critiques that highlighted expectations aligned with the high standards of ICLR. Addressing these comments has been invaluable in refining our understanding of the impact and utility of our work, as well as its position within the broader field. We briefly summarize the strengths and weaknesses pointed out by reviewers.\\n\\n1. Strengths\\n- Novel Idea on Biologically Inspired Inductive Bias **(Reviewer 7Z3f, nN4T, ZVMo)**\\n- Broad Applicability of Our Method **(Reviewer nN4T, ZVMo, ozim)**\\n- Comprehensive Experimental Validation and Robustness **(Reviewer 7Z3f, nN4T, ZVMo, ozim)**\\n- Clear Writing and Effective Visualizations **(Reviewer 7Z3f, ZVMo, nN4T)**\\n- Demonstration of Model Flexibility with Sudoku Challenge **(Reviewer nN4T, ZVMo, ozim)**\\n2. Weaknesses\\n- Not SoTA or Modest Improvements in Performance (**Reviewer 7Z3f, nN4T, ozim**)\\n- Limited Datasets and Tasks (**Reviewer 7Z3f, nN4T**)\\n- Weak Representational Similarity Analysis (**Reviewer 7Z3f, nN4T, ozim**)\\n- Unclear Relation to ViT Integration (**Reviewer nN4T, ozim**)\\n\\n# General Response\\nOur research represents an ambitious attempt to bridge neuroscience and AI, exploring whether **biologically-inspired inductive biases can not only enhance CNNs but also make them more aligned with the brain's mechanisms**. This paper is not the conclusion but the beginning\\u2014a foundational step toward conceptualizing and demonstrating this idea's potential.\\n\\nWe recognize the reviewers\\u2019 concerns about modest performance improvements, limited engagement with SoTA benchmarks, and the scope of tasks explored. These are valid limitations of our current study, and we appreciate the reviewers\\u2019 insights. However, the true value of our work lies in its **vision and foundation**: we have created an **easily pluggable convolutional module** that opens the door for future researchers to build on our findings and explore these possibilities further.\\n\\nMoreover, this research is more than performance numbers\\u2014it provides a unique contribution by showing that biologically-observed mechanisms, when tested in AI models, **reveal meaningful inductive biases critical to visual processing**. This aspect of our work contributes to a deeper understanding of how neuroscience can inform AI, offering a perspective that extends beyond conventional performance-focused computer vision research.\\n\\nWe sincerely hope that reviewers can see this work not as a final statement but as a first, necessary step\\u2014**a catalyst for deeper exploration at the intersection of neuroscience and AI**. 
With this in mind, we have carefully structured our rebuttal to address the reviewers' valuable feedback while staying true to the motivation and purpose behind this study.\"}", "{\"comment\": \"---\\n> **Weakness 2) Missing ViT Baselines in Robustness and Sudoku Benchmarks**\\n\\nThank you for pointing out the lack of ViT baselines for the Sudoku task. While we relied on a previously reported CNN model [1], we could not find ViT implementations specifically designed for this problem. Although Recurrent Transformer models, such as Yang et al. [2], achieve over 95% accuracy on textual Sudoku, no comparable ViT implementation exists for direct benchmarking.\\n\\nTo address this, we designed SudokuViT, a custom ViT architecture tailored for Sudoku puzzles, with a final linear projection reshaped into [batch\\_size, 9, height, width]. Two variations were explored:\\n1. **SudokuViT3x3**: Processes 3 by 3 patches as sequences.\\n2. **SudokuViT1x1**: Treats each pixel as a token, effectively functioning as a standard Transformer.\\n\\nUnfortunately, SudokuViT3x3 struggled to train effectively, achieving only 27% number accuracy. SudokuViT1x1 showed slight improvement, with 49% number accuracy, but still performed poorly on Sudoku-specific metrics such as row-column accuracy and box accuracy.\\n\\n| Model | Loss | RowColAcc | BoxAcc | SudokuAcc | NumberAcc |\\n|---------------|--------|-----------|--------|-----------|-----------|\\n| SudokuViT3x3 | 1.753 | 0.08% | 0.03% | 0.00% | 27% |\\n| SudokuViT1x1 | 1.282 | 0.20% | 0.20% | 0.00% | 49% |\\n\\nThese results suggest that ViTs face significant challenges in capturing the grid-like structure and spatial relationships inherent to Sudoku puzzles, which CNNs handle effectively. However, we acknowledge that this performance could also stem from our inability to identify an effective training strategy or optimized model design for SudokuViT. 
Thus, it may not be appropriate to treat these results as definitive baselines for ViT performance on Sudoku.\\n\\nFor future work, we recommend referring to Yang et al. [2], where a Recurrent Transformer approach demonstrated strong performance on textual Sudoku, as a more suitable baseline for Transformer-based methods.\\n\\nWe appreciate the reviewer's suggestion, as it allowed us to explore the limitations and potential of ViT architectures for structured reasoning tasks like Sudoku. This exploration provides valuable insights into the architectural trade-offs between CNNs and Transformers in such domains.\\n\\n### **References**\\n[1] Oinar. \\u201cHow to solve sudoku with convolutional neural networks (cnn).\\u201d [GitHub link](https://github.com/chingisooinar/sudoku-solver.pytorch) (2021) \\n[2] Yang et al. \\\"Learning to solve constraint satisfaction problems with recurrent transformer.\\\" ICLR (2023)\\n\\n---\\n> **Weakness 3) Redundant Section on RSM Similarity Comparison**\\n\\nThank you for your valuable feedback on the RSM experiments. We recognize that the necessity of this section may not be immediately clear. However, one of the key goals of our study is to investigate whether integrating biologically observed inductive biases into CNNs not only enhances engineering performance **but also improves alignment with brain representations**.\\n\\nSection 6 plays a critical role in demonstrating this connection. For example, biases inspired by V1 (e.g., Gaussian sparsity, **Appendix A.3**) are shown to improve representational alignment with neural activity while simultaneously offering engineering benefits. This dual advantage underscores the value of biologically motivated constraints in informing model design.\\n\\nWe agree that further studies involving more datasets, brain regions, and CNN variations would strengthen our findings. 
As part of these efforts, we **demonstrated neural activity prediction experiments (Table 5)**, which provide additional evidence of the alignment between CNN representations and biological data.\\n\\nWe hope this clarification justifies the inclusion of Section 6, emphasizing its role in illustrating the complementary relationship between neuroscience and AI.\"}" ] }
0L8wZ9WRah
Attention-aware Post-training Quantization without Backpropagation
[ "Junhan Kim", "Ho-young Kim", "Eulrang Cho", "Chungman Lee", "Joonyoung Kim", "Yongkweon Jeon" ]
Quantization offers a promising solution for deploying large-scale language models (LLMs) on resource-constrained devices. However, early quantization methods, developed for smaller networks like ResNet, rely on gradient-based optimization, which becomes impractical for hyper-scale LLMs with billions of parameters. While recently proposed backpropagation-free post-training quantization (PTQ) methods alleviate this issue, their performance is limited by a lack of inter-layer dependency consideration. In this paper, we introduce a novel PTQ algorithm that incorporates inter-layer dependencies without relying on backpropagation. The key innovation is the development of attention-aware Hessian matrices that capture inter-layer interactions within the attention module. Extensive experiments demonstrate that our approach significantly outperforms conventional PTQ methods, particularly at low bit-widths.
[ "Quantization", "Hyper-scale LLMs", "Attention", "Hessian" ]
Reject
https://openreview.net/pdf?id=0L8wZ9WRah
https://openreview.net/forum?id=0L8wZ9WRah
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xUqRjFQNgV", "xU96bWE0KV", "vg84JchmIG", "usxLth2yjJ", "sE71lBVbrP", "qwyHPz4hmt", "q5trY7y8J7", "pqi8HaxZyx", "p0sO5RRM3p", "khci894E6L", "iGrzEHOwbD", "eWLgtfQdj2", "eDYCmydrLk", "dGbkuFE7g0", "b4oE6HRLrn", "ZSG4dDYjH2", "WIc4DRmX4T", "V2UPYMOLJP", "TASnun3yLx", "QohLTjKr3j", "QmHx6ibiiO", "NnufEVKu1i", "GSN6IkYDZP", "E8wwM6rYVq", "DWNT0QvOPW", "DP7cMSQzzp", "5Fq0L7txWG", "56uyxQsqqE" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734675827356, 1732504738929, 1732280598357, 1732012268648, 1730607930036, 1732869996411, 1732012232853, 1732880977292, 1730709889518, 1732012064132, 1732012024205, 1732260942723, 1732159232012, 1732012140828, 1732280358825, 1732213939832, 1732634479512, 1732012187836, 1732011982864, 1737523577503, 1732242056021, 1732708578873, 1732011818107, 1730205505002, 1730601454521, 1732505460792, 1732011865442, 1732503811860 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3463/Area_Chair_FSL8" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_XkiM" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_kfPf" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_gH6w" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_RmBt" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_gH6w" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_kfPf" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_kfPf" ], [ "ICLR.cc/2025/Conference/Submission3463/Reviewer_RmBt" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ], [ "ICLR.cc/2025/Conference/Submission3463/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces BOA, a post-training quantization (PTQ) method for large language models (LLMs) that avoids backpropagation by leveraging attention-aware Hessian matrices to capture inter-layer dependencies within the attention module. BOA demonstrates improved quantization accuracy, particularly at low bit-widths (e.g., INT2), and incorporates techniques like Hessian relaxation and head-wise simultaneous quantization to reduce computational overhead. While reviewers commend the paper's focus on resource-efficient quantization for LLMs and its compatibility with other techniques like SmoothQuant and Z-FOLD, they find the contributions incremental and the novelty limited. The experiments are primarily conducted on outdated models (e.g., BLOOM, LLaMA1, OPT) and lack validation on state-of-the-art models or comparisons with recent quantization methods (e.g., QuaRot, SpinQuant). 
Additionally, BOA\u2019s processing and memory overheads remain higher than some existing methods, while its performance improvements are marginal. Overall, reviewers recognize the importance of the topic but suggest that stronger experimental validation and comparisons are needed to make BOA a more compelling contribution.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer believed that the paper's positioning and contributions remain unconvincing, and the authors overlook critical benchmarks by focusing comparisons primarily on GPTQ while distancing itself from established PTQ methods like SpinQuant, AffineQuant, and OmniQuant.\"}", "{\"comment\": \"Dear Reviewer RmBt\\n\\nThanks for the time you dedicated to reviewing our paper!\\n\\nYou were concerned about our paper's positioning and contributions, and accuracy and efficiency improvement over existing transformation-based PTQ methods.\", \"we_think_our_main_rebuttal_addresses_these_concerns_due_to_the_following_reasons\": [\"We have emphasized that the proposed method aims to **optimize integer weights by capturing inter-layer dependencies without relying on time-intensive, gradient-based optimization**. We have also noted that the proposed method is **orthogonal to existing transformation-based PTQ methods** and **can be integrated with them**. Our results in Tables 5 and 6 and Table I (in the main rebuttal) demonstrate that the proposed method can be used to boost the performance of existing transformation-based methods such as SmoothQuant, Z-Fold, and QuaRot.\", \"Our results in Table 6 and Table II (in the main rebuttal) clearly demonstrate that the proposed BoA significantly outperforms transformation-based methods such as OmniQuant and AffineQuant. For example, **the perplexity of OmniQuant and AffineQuant is larger than $10^{3}$ in some cases while the proposed BoA exhibits reasonable perplexity across all sizes of models** (see Table 6). 
Furthermore, OmniQuant and AffineQuant suffer from an unstable quantization process, i.e., **the training loss diverges** (see NaN in Table 6).\", \"Our results in Table 13(a) clearly demonstrate that the proposed method completes quantization faster than OmniQuant and AffineQuant. In particular, **AffineQuant needs 18.41 hours and 44.25 hours for quantizing 13B and 30B models, respectively, while the proposed method can finish quantization in 5 hours and 11 hours, respectively.**\", \"If you have any further concerns, please let us know. If not, we would be very grateful if you were to consider increasing your score.\"]}", "{\"comment\": \"**3. Efficiency over OmniQuant and AffineQuant**\\n - In Appendix C.4, we compared the quantization processing times of the proposed BoA, OmniQuant, and AffineQuant. We note that we used the same number of GPUs and the same amount of calibration data across all quantization methods.\\n - From Table 13(a), we observe that although OmniQuant and AffineQuant do not optimize integer weights, their processing time is still longer than that required by the proposed method because they rely on time-intensive gradient-based optimization. In particular, AffineQuant requires 4 times longer processing time than that required by the proposed method; **for example, AffineQuant needs 18.41 hours and 44.25 hours for quantizing 13B and 30B models, respectively, while the proposed method can finish quantization in 5 hours and 11 hours, respectively.**\\n\\nDue to such improvement in accuracy and efficiency together with the compatibility with existing transformation-based methods, we believe that the contribution of this work is meaningful and valuable. We hope for the reviewer's kind evaluation and recognition of our effort.\\n\\n<List of references>\\n\\n[1] G. Xiao et. al., \\\"SmoothQuant: Accurate and efficient post-training quantization for large language models,\\\" ICML 2023.\\n\\n[2] J. Chee et. 
al., \\\"QuIP: 2-bit quantization of large language models with guarantees,\\\" NeurIPS 2023.\\n\\n[3] Y. Jeon et. al., \\\"A frustratingly easy post-training quantization scheme for LLMs,\\\" EMNLP 2023.\\n\\n[4] W. Shao et. al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[5] Y. Ma et. al., \\\"AffineQuant: Affine transformation quantization for large language models,\\\" ICLR 2024.\\n\\n[6] S. Ashkboos et. al., \\\"QuaRot: Outlier-free 4-bit inference in rotated LLMs,\\\" NeurIPS 2024.\\n\\n[7] Z. Liu et. al., \\\"SpinQuant: LLM quantization with learned rotations,\\\" arXiv 2024.\"}", "{\"comment\": \"**2. Comprehensive comparisons of quantization times. Although the paper introduces a training-free PTQ method, it may be slower than training-based methods. For example, Table 2 shows that BoA takes 1 hour to quantize 2.7B models, while GPTQ quantizes larger 13B models in only 21 minutes. OmniQuant, a training-based method, requires only 1.1 hours for 7B models.**\\n\\n - We appreciate the reviewer's comments. In Appendix C.4, we have compared the quantization processing times of the proposed BoA, the conventional training-free method (GPTQ), and training-based methods (OmniQuant and AffineQuant). We note that the processing times of GPTQ and OmniQuant are longer than those reported in the original papers [1], [2] due to the following reasons:\\n\\n - For GPTQ, we set quantization parameters (scale and zero-point) to minimize the layer-wise reconstruction error (see line 364), which requires a grid search [Section 3.1, 3]. It should be noted that the naive Min-Max-based quantization parameters used in the original GPTQ paper [1] can accelerate the quantization process but result in extremely worse low-bit quantization performance (perplexity is larger than $10^{3}$; see [Table 4, 3]).\\n \\n - For OmniQuant, the authors reported the processing time of training the model for 20 epochs [Table A12, 2]. 
However, for the 2-bit quantization, they actually performed training for 40 epochs [Section 4.1, 2]. Therefore, we measure the quantization processing time required to conduct training for 40 epochs.\\n\\n - As Table 13 in Appendix C.4 shows, GPTQ requires a shorter processing time and a smaller amount of memory than those required by the proposed BoA. This is because GPTQ quantizes all the rows of the weight matrix simultaneously by assuming independence between different layers. In contrast, BoA sequentially quantizes sub-weight matrices (see Figure 1(b)) to consider the inter-layer dependencies within the attention module, which eventually leads to significantly better quantization performance than GPTQ (at least 8% improvement in zero-shot accuracy; see Table I above). Clearly, there is a trade-off between quantization speed / memory cost and accuracy. In real situations, when one needs to preserve the performance of the original model as much as possible, the proposed BoA would be an intriguing solution (see Table I above). It should be noted that such additional processing time is imposed only during the quantization step, and the real inference time of quantized models obtained by BoA is exactly the same as that of GPTQ.\\n\\n - We emphasize that the proposed BoA performs significantly better than existing training-based approaches (e.g., OmniQuant and AffineQuant), yet facilitates faster quantization (see Tables 6 and 13 in the main text). We note that OmniQuant does not take too much time for quantizing large LLMs even though it performs gradient-based optimization. This is because OmniQuant reduces the number of learnable parameters greatly to accelerate gradient-based optimization. Specifically, instead of optimizing integer weights (which requires a large number of learnable parameters), OmniQuant relies on naive nearest-rounding for assigning integer weights. 
It learns only a small number of quantization parameters and certain parameters related to the model transformation. In doing so, the quantization process can be accelerated, but OmniQuant suffers from an unstable quantization process or collapses for low-bit quantization (see Table 6 in the main text). It is worth noting that AffineQuant improves OmniQuant by introducing additional learnable parameters [4]. While this enhances quantization performance, it incurs huge processing time compared to the proposed BoA (4 times longer processing time; see Table 13(a)), which demonstrates the inefficiency of training-based approaches over the proposed method.\\n\\n<List of references>\\n\\n[1] E. Frantar et. al., \\\"GPTQ: Accurate post-training quantization for generative pre-trained Transformers,\\\" ICLR 2023.\\n\\n[2] W. Shao et. al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[3] Y. Jeon et. al., \\\"A frustratingly easy post-training quantization scheme for LLMs,\\\" EMNLP 2023.\\n\\n[4] Y. Ma et. al., \\\"AffineQuant: Affine transformation quantization for large language models,\\\" ICLR 2024.\"}", "{\"summary\": \"The paper introduced the BOA post-training quantization algorithm designed for LLMs that overcomes the limitations of traditional quantization methods, which struggle with inter-layer dependencies and backpropagation requirements in LLMs. BOA leveraged attention-aware Hessian matrices to better capture inter-layer interactions within the attention module, enhancing performance, especially at low bit-widths. 
The algorithm employed Hessian relaxation and head-wise simultaneous quantization to reduce computational and memory costs, making it feasible for quantizing LLMs without backpropagation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The topic of this paper is of significant importance and represents one of the most active and rapidly evolving research areas in the field. As LLMs grow increasingly complex, their deployment on resource-constrained devices requires innovative solutions to reduce computational and memory demands. Quantization, as a compression technique, has gained considerable traction for enabling efficient deployment of LLMs without sacrificing model accuracy.\", \"weaknesses\": \"The technical approach of this paper is relatively straightforward, lacking intricate or highly novel methodologies. Additionally, certain English terminology within the paper is used imprecisely, which may affect clarity and readability. The comparison methods are somewhat limited, providing a narrow benchmark for evaluating the proposed technique. Moreover, while the experimental results demonstrate some improvements, the advantage over existing methods is not substantial, suggesting the need for further validation against methods such as SmoothQuant, LLMC, and QuIP.\", \"questions\": \"The advantage over existing methods is not substantial, suggesting the need for further validation against methods such as SmoothQuant, LLMC, and QuIP.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The reported AutoRound results are worse than those of OmniQuant.\\n\\nHowever, as shown in the official open-source repo of AutoRound [https://github.com/intel/auto-round/blob/main/docs/acc.md](https://github.com/intel/auto-round/blob/main/docs/acc.md), we can see that AutoRound surpasses OmniQuant by a large margin, such as +8% Acc. in w2g128 llama-2-7B. 
I suggest reusing these results for comparisons.\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and constructive suggestions on our work.\\nOur point-by-point response is as follows.\\nPlease refer to the end of our final response for the list of references.\\n\\n**1. Additional experiments on newer models such as LLaMA-2 and LLaMA-3 & Group-wise quantization results**\\n\\n - We appreciate the reviewer's suggestion. The main reason we utilized OPT and BLOOM models in our validation is that they allow performance comparisons across various model sizes (from 125M to 30B).\\n\\n - As suggested, we have quantized recent LLaMA2 and LLaMA3 models using the proposed BoA and GPTQ (see Table I below). We have also included group-wise quantization results, as suggested by the reviewer. As evident, the proposed BoA uniformly outperforms GPTQ by a significant margin. For almost all quantization configurations, BoA achieves at least 8% improvement over GPTQ in the zero-shot accuracy performance. In particular, the 2-bit quantized \\\"LLaMA2-7B\\\" model obtained by BoA even performs better than the \\\"LLaMA2-13B\\\" model quantized with GPTQ, even when group-wise quantization parameters are applied (see the performance of W2G256 LLaMA2-13B obtained by GPTQ).\\n\\n<Table I. Quantization performance of BoA and GPTQ on LLaMA2 and LLaMA3 models transformed via QuaRot. 
'GN' means that quantization has been applied to groups of N consecutive weights.>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n\\n|Model|Precision|Method|Wiki2|C4|\\n|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|39.56|47.37|\\n|||**BoA**|**14.77**|**18.41**|\\n||W2G256|GPTQ|37.63|43.46|\\n|||**BoA**|**13.41**|**16.80**|\\n||W2G64|GPTQ|29.38|36.77|\\n|||**BoA**|**11.63**|**14.68**|\\n||W2G16|GPTQ|13.75|17.22|\\n|||**BoA**|**8.880**|**11.33**|\\n|LLaMA2-13B|W2|GPTQ|21.89|27.48|\\n|||**BoA**|**11.93**|**18.14**|\\n||W2G256|GPTQ|15.17|19.24|\\n|||**BoA**|**10.47**|**13.71**|\\n||W2G64|GPTQ|13.15|17.09|\\n|||**BoA**|**9.116**|**11.99**|\\n||W2G16|GPTQ|9.819|13.28|\\n|||**BoA**|**7.231**|**9.589**|\\n|LLaMA3-8B|W2|GPTQ|40.30|51.92|\\n|||**BoA**|**23.50**|**31.47**|\\n||W2G256|GPTQ|34.65|43.50|\\n|||**BoA**|**21.41**|**29.09**|\\n||W2G64|GPTQ|25.83|36.04|\\n|||**BoA**|**17.85**|**24.81**|\\n||W2G16|GPTQ|15.81|22.98|\\n|||**BoA**|**13.00**|**18.90**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n\\n|Model|Precision|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|22.61|34.81|33.56|30.33|\\n|||**BoA**|30.89|55.05|51.22|**45.72**|\\n||W2G256|GPTQ|20.99|35.61|31.89|29.50|\\n|||**BoA**|29.61|55.01|52.72|**45.78**|\\n||W2G64|GPTQ|24.40|39.02|36.12|33.18|\\n|||**BoA**|33.02|59.85|55.68|**49.52**|\\n||W2G16|GPTQ|27.22|50.21|51.81|43.08|\\n|||**BoA**|36.43|63.47|63.13|**54.34**|\\n|LLaMA2-13B|W2|GPTQ|25.60|39.31|38.27|34.39|\\n|||**BoA**|31.31|58.38|53.07|**47.59**|\\n||W2G256|GPTQ|29.44|51.43|49.58|43.48|\\n|||**BoA**|35.67|62.04|59.32|**52.34**|\\n||W2G64|GPTQ|29.78|50.55|50.59|43.64|\\n|||**BoA**|37.54|64.39|62.45|**54.79**|\\n||W2G16|GPTQ|35.41|60.10|59.73|51.75|\\n|||**BoA**|42.32|69.78|69.04|**60.38**|\\n|LLaMA3-8B|W2|GPTQ|24.15|40.24|42.96|35.78|\\n|||**BoA**|29.86|51.68|51.85|**44.46**|\\n||W2G256|GPTQ|26.19|45.37|42.57|38.04|\\n|||**BoA**|30.46|55.26|52.61|**46.11**|\\n||W2G64|GPTQ|29.01|49.37|45.94|41.44|\\n|||**BoA**|32.94|59.64|54.95|**49.18**|\\n||W2G16|GPTQ
|34.39|61.91|58.29|51.53|\\n|||**BoA**|39.85|65.74|63.17|**56.25**|\"}", "{\"comment\": \"We appreciate the reviewer's comments.\\n\\n - We kindly ask the reviewer to see the _**official results in Table 14 of the original AutoRound paper**_. For the setting that the reviewer pointed out (W2G128 quantization of LLaMA2-7B), _**AutoRound's perplexity performance has been reported as NaN (which means that the training loss diverges),**_ which contradicts the results that the reviewer mentioned. For this reason, we thought that the reported results were strange, so we ran the official AutoRound code and reported the obtained results. \\n\\n - We think that the performance of AutoRound varies significantly with random seeds used to sample calibration data, as can be observed in the table below. As mentioned by the reviewer, _**AutoRound can occasionally outperform OmniQuant (seed 0), but AutoRound shows inferior performance in most cases and even collapses (i.e., perplexity is 96.78) for seed 500**_. \\n\\n- In contrast, the proposed BoA exhibits much smaller variation with different calibration data, that is, _**the standard deviation (stdev) of BoA is only 0.045 while stdev of AutoRound is 30.55**_. Furthermore, BoA uniformly outperforms AutoRound for all different seeds, which clearly demonstrates the outstanding and stable performance of BoA.\\n\\nWe hope the reviewer finds our response satisfactory. If you have any further concerns, please let us know.\\n\\n<Table. 
W2G128 performance of AutoRound and the proposed BoA on LLaMA2-7B (perplexity ($\\\\downarrow$) for WikiText-2)>\\n\\n|Method|Seed|0|10|20|100|200|500|1000|Average $\\\\pm$ Stdev|\\n|-|-|-|-|-|-|-|-|-|-|\\n|AutoRound||10.00|15.25|21.58|12.82|24.59|96.78|17.98|**28.43 $\\\\pm$ 30.55**|\\n|**BoA**||9.777|9.747|9.747|9.707|9.708|9.647|9.676|**9.716 $\\\\pm$ 0.045**|\"}", "{\"summary\": \"This paper presents a novel post-training quantization (PTQ) method, termed BOA (Backpropagation-free Optimization for Attention-aware PTQ), targeting large language models (LLMs) without relying on backpropagation. The approach introduces attention-aware Hessian matrices that capture inter-layer dependencies within the attention module, aiming to improve quantization accuracy, especially at low bit-widths (e.g., INT2). BOA incorporates techniques like Hessian relaxation and efficient computation of inverse Hessians to mitigate the high computational costs. The method is benchmarked against existing PTQ approaches on LLMs, demonstrating improved performance in terms of perplexity and zero-shot task accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed BOA consider inter-layer dependencies within the attention module when optimize a weight-rounding mechanism. It is beneficial to maintain higher quantization accuracy, especially at low-bit precision.\\n\\n2. The proposed BOA method demonstrates impressive results, especially in the low-bit regime (e.g., INT2 quantization).\\n\\n3. The paper includes extensive experiments across multiple model types and sizes, demonstrating scalability across LLMs of different parameter counts.\", \"weaknesses\": \"1. Novelty Limitations: The primary contribution, the attention-aware Hessian matrix, is an incremental improvement over existing Hessian-based PTQ methods. 
While capturing inter-layer dependencies within the attention module is beneficial, the idea is not a novel quantization paradigm.\\n\\n2. The authors introduce optimizations approaches like Hessian relaxation and efficient computation of inverse Hessians, but the results did not show the effect of these optimization methods.\", \"questions\": \"Refer to 2 in weakness. What is the effectiveness of proposed approaches in terms of efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**3. Straightforward approach, lacking intricate or highly novel methodologies.**\\n\\n - We appreciate the reviewer's comment. As the reviewers RmBt and kfPf acknowledged, we believe that our contribution is innovative in the sense that the proposed BoA is the first quantization method that attempts to capture inter-layer dependencies **without backpropagation**.\\n\\n - While it is well-known that capturing inter-layer dependencies is beneficial for quantization, all the existing works rely on time-consuming gradient-based optimization [4], [5], [8], which would not be suitable for real-world deployment where models to be deployed are frequently updated and multiple times of hyper-parameter searches are needed. Indeed, the first PTQ method that attempts to capture inter-layer dependencies (called BRECQ [8]) needs more than 10 hours even for relatively small-sized models (e.g., OPT-1.3B), and requires multiple GPU resources to quantize LLMs having more than 7B parameters.\\n\\n - Recently, OmniQuant [4] and AffineQuant [5] accelerated the quantization processing time by learning only a small number of quantization parameters (scale and zero-point) and certain parameters related to the model transformation. 
However, they suffer from an unstable quantization process due to the gradient approximation involved in the quantization parameter learning and sacrifice low-bit performance because they apply naive nearest-rounding when assigning integer weights (see Table 6). Furthermore, although OmniQuant and AffineQuant do not optimize integer weights, their processing time is still longer (e.g., 4 times longer for AffineQuant) than that required by the proposed BoA (see Table 13(a)).\\n\\n - To avoid the aforementioned disadvantages, we established the attention-aware Hessians, making this the first work to consider inter-layer dependencies while circumventing gradient-based optimization. Moreover, we presented several relaxation techniques, without which multiple GPU resources are required and the quantization cannot be done in a reasonable processing time. For these reasons, we believe that the contribution of this work is meaningful and valuable. We hope for the reviewer's kind evaluation and acknowledgment of our effort to develop a practical quantization solution that captures inter-layer dependencies.\\n\\n<List of references>\\n\\n[1] G. Xiao et. al., \\\"SmoothQuant: Accurate and efficient post-training quantization for large language models,\\\" ICML 2023.\\n\\n[2] J. Chee et. al., \\\"QuIP: 2-bit quantization of large language models with guarantees,\\\" NeurIPS 2023.\\n\\n[3] Y. Jeon et. al., \\\"A frustratingly easy post-training quantization scheme for LLMs,\\\" EMNLP 2023.\\n\\n[4] W. Shao et. al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[5] Y. Ma et. al., \\\"AffineQuant: Affine transformation quantization for large language models,\\\" ICLR 2024.\\n\\n[6] S. Ashkboos et. al., \\\"QuaRot: Outlier-free 4-bit inference in rotated LLMs,\\\" NeurIPS 2024.\\n\\n[7] R. Gong et. 
al., \\\"LLMC: Benchmarking large language model quantization with a versatile compression toolkit,\\\" EMNLP 2024.\\n\\n[8] Y. Li et. al., \\\"BRECQ: Pushing the limit of post-training quantization by block reconstruction,\\\" ICLR 2021.\"}", "{\"comment\": \"**2. Advantage over existing methods is not substantial.**\\n\\n - We appreciate the reviewer's comment. For a more thorough comparison, we have quantized recent LLaMA2 and LLaMA3 models with the proposed BoA and GPTQ (see Table II below). We also include the group-wise quantization results, as suggested by the reviewer kfPf. As evident, the proposed BoA uniformly outperforms GPTQ for all models. For almost all quantization configurations, BoA achieves at least 8% improvement over GPTQ in the zero-shot accuracy performance. In particular, the 2-bit quantized \\\"LLaMA2-7B\\\" model obtained by BoA even performs better than the \\\"LLaMA2-13B\\\" model quantized with GPTQ, even applied with group-wise quantization parameters (see the performance of W2G256 LLaMA2-13B obtained by GPTQ). In this sense, we believe the improvement over GPTQ is not marginal.\\n\\n<Table II. Quantization performance of BoA and GPTQ on LLaMA2 and LLaMA3 models transformed via QuaRot. 
'GN' means that quantization has been applied to groups of N consecutive weights.>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n\\n|Model|Precision|Method|Wiki2|C4|\\n|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|39.56|47.37|\\n|||**BoA**|**14.77**|**18.41**|\\n||W2G256|GPTQ|37.63|43.46|\\n|||**BoA**|**13.41**|**16.80**|\\n||W2G64|GPTQ|29.38|36.77|\\n|||**BoA**|**11.63**|**14.68**|\\n||W2G16|GPTQ|13.75|17.22|\\n|||**BoA**|**8.880**|**11.33**|\\n|LLaMA2-13B|W2|GPTQ|21.89|27.48|\\n|||**BoA**|**11.93**|**18.14**|\\n||W2G256|GPTQ|15.17|19.24|\\n|||**BoA**|**10.47**|**13.71**|\\n||W2G64|GPTQ|13.15|17.09|\\n|||**BoA**|**9.116**|**11.99**|\\n||W2G16|GPTQ|9.819|13.28|\\n|||**BoA**|**7.231**|**9.589**|\\n|LLaMA3-8B|W2|GPTQ|40.30|51.92|\\n|||**BoA**|**23.50**|**31.47**|\\n||W2G256|GPTQ|34.65|43.50|\\n|||**BoA**|**21.41**|**29.09**|\\n||W2G64|GPTQ|25.83|36.04|\\n|||**BoA**|**17.85**|**24.81**|\\n||W2G16|GPTQ|15.81|22.98|\\n|||**BoA**|**13.00**|**18.90**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n\\n|Model|Precision|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|22.61|34.81|33.56|30.33|\\n|||**BoA**|30.89|55.05|51.22|**45.72**|\\n||W2G256|GPTQ|20.99|35.61|31.89|29.50|\\n|||**BoA**|29.61|55.01|52.72|**45.78**|\\n||W2G64|GPTQ|24.40|39.02|36.12|33.18|\\n|||**BoA**|33.02|59.85|55.68|**49.52**|\\n||W2G16|GPTQ|27.22|50.21|51.81|43.08|\\n|||**BoA**|36.43|63.47|63.13|**54.34**|\\n|LLaMA2-13B|W2|GPTQ|25.60|39.31|38.27|34.39|\\n|||**BoA**|31.31|58.38|53.07|**47.59**|\\n||W2G256|GPTQ|29.44|51.43|49.58|43.48|\\n|||**BoA**|35.67|62.04|59.32|**52.34**|\\n||W2G64|GPTQ|29.78|50.55|50.59|43.64|\\n|||**BoA**|37.54|64.39|62.45|**54.79**|\\n||W2G16|GPTQ|35.41|60.10|59.73|51.75|\\n|||**BoA**|42.32|69.78|69.04|**60.38**|\\n|LLaMA3-8B|W2|GPTQ|24.15|40.24|42.96|35.78|\\n|||**BoA**|29.86|51.68|51.85|**44.46**|\\n||W2G256|GPTQ|26.19|45.37|42.57|38.04|\\n|||**BoA**|30.46|55.26|52.61|**46.11**|\\n||W2G64|GPTQ|29.01|49.37|45.94|41.44|\\n|||**BoA**|32.94|59.64|54.95|**49.18**|\\n||W2G16|GPTQ
|34.39|61.91|58.29|51.53|\\n|||**BoA**|39.85|65.74|63.17|**56.25**|\"}", "{\"title\": \"I maintain my concerns about this paper's positioning and contributions.\", \"comment\": \"I maintain my concerns about this paper's positioning and contributions. The authors appear to deliberately distance themselves from established PTQ methods like SpinQuant, AffineQuant, and OmniQuant, choosing to compare primarily with GPTQ. This approach is problematic for several reasons:\\n\\n1.Regarding efficiency, PTQ methods have gained widespread adoption precisely because of their minimal computational and data requirements. The authors themselves acknowledge that methods like OmniQuant require few tuning parameters. The paper fails to demonstrate significant advantages in this crucial aspect of quantization.\\n\\n2.Regarding accuracy, the transformations employed in the referenced PTQ methods are invertible (e.g., affine transformations in SpinQuant and AffineQuant, scale-shift transformations in OmniQuant). These transformations can theoretically be absorbed into subsequent layers - for instance, if Layer 1 applies a transformation to W, Layer 2 can apply an inverse transformation to maintain distributional consistency. This mathematical property contributes to PTQ's strong generalization capabilities, which has led to its widespread adoption in both industry and academia.\\n\\nGiven these considerations, I fail to see substantial advantages or meaningful differentiation in the proposed method. The paper does not present convincing evidence to support its claims of superiority over existing approaches. Therefore, I maintain my original assessment and score.\"}", "{\"title\": \"Look forward to further discussions\", \"comment\": \"Dear reviewers,\\n\\nWe sincerely appreciate the reviewers' valuable feedback and constructive suggestions. \\n\\nWe have made a thorough effort to address all the concerns raised. \\n\\nBelow is a summary of the key points in our responses.\\n\\n**1. 
Additional Experimental Results**\\n \\n - We have included \\n - results for recent language models such as LLaMA2 and LLaMA3\\n - integration results with the recent transformation method QuaRot [1]\\n - group-wise quantization results\\n - Overall, we have achieved **at least 8% improvement** in accuracy across almost all quantization configurations, which we believe is a significant advancement.\\n\\n**2. Emphasis on Our Contribution**\\n - While two reviewers acknowledged the innovation of our method, two others expressed concerns about its novelty.\\n - The core novelty of our work lies in capturing inter-layer dependencies **without relying on backpropagation**, unlike conventional methods (e.g., BRECQ [2], OmniQuant [3], AffineQuant [4]) that depend on time-intensive, gradient-based optimization.\\n\\n**3. Comprehensive Comparisons of Quantization Processing Times**\\n - We have compared the quantization processing times of the proposed method, the conventional training-free method (GPTQ [5]), and training-based methods (OmniQuant [3] and AffineQuant [4]).\\n - Compared to existing training-based approaches, the proposed method enables **faster** quantization while delivering **significantly better** performance.\\n - Although the proposed method requires more processing time than GPTQ due to the consideration of inter-layer dependencies, this consideration results in significantly improved performance, with **over 8% improvement**.\\n\\nFor more details, we kindly invite the reviewers to refer to our point-by-point responses.\\n\\nOnce again, we are grateful to the reviewers for their time to read and review our paper, and we look forward to further constructive discussions.\\n\\n[1] S. Ashkboos et. al., \\\"QuaRot: Outlier-free 4-bit inference in rotated LLMs,\\\" NeurIPS 2024.\\n\\n[2] Y. Li et. al., \\\"BRECQ: Pushing the limit of post-training quantization by block reconstruction,\\\" ICLR 2021.\\n\\n[3] W. Shao et. 
al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[4] Y. Ma et. al., \\\"AffineQuant: Affine transformation quantization for large language models,\\\" ICLR 2024.\\n\\n[5] E. Frantar et. al., \\\"GPTQ: Accurate post-training quantization for generative pre-trained Transformers,\\\" ICLR 2023.\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and constructive suggestions on our work.\\nOur point-to-point response is as follows.\\n\\n**1. Validation on more recent models such as LLaMA3 & Integration with more recent quantization methods such as QuaRot**\\n\\n - We appreciate the reviewer's suggestion. The main reason why we utilized OPT and BLOOM models in our validation is that the performance comparison on various sizes of models (from 125M to 30B) is possible.\\n\\n - As suggested, we have quantized the recent LLaMA3-8B model with the proposed BoA and the conventional GPTQ. Furthermore, to check the compatibility with the recent transformation method, we have transformed LLaMA3-8B via QuaRot and then measured the quantization performance on the transformed LLaMA3-8B model (see Table I below). We observe that both BoA and GPTQ perform better when QuaRot is applied. As evident, the proposed BoA uniformly performs better than GPTQ. In particular, when QuaRot has been applied, BoA outperforms GPTQ by a significant margin (9% improvement in the zero-shot accuracy).\\n\\n<Table I. 
INT2 quantization performance of BoA and GPTQ on LLaMA3-8B>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n|Transformation|Method|Wiki2|C4|\\n|-|-|-|-|\\n|None|GPTQ|76.77|54.50|\\n||**BoA**|**71.75**|**46.04**|\\n|QuaRot|GPTQ|40.30|51.92|\\n||**BoA**|**23.50**|**31.47**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n|Transformation|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|\\n|None|GPTQ|20.65|32.66|44.00|32.44|\\n||**BoA**|22.70|35.73|47.37|**35.27**|\\n|QuaRot|GPTQ|24.15|40.24|42.96|35.78|\\n||**BoA**|29.86|51.68|51.85|**44.46**|\\n\\n**2. Marginal improvement over GPTQ**\\n\\n - We appreciate the reviewer's comment. For a more thorough comparison, we have quantized recent LLaMA2 and LLaMA3 models with the proposed BoA and GPTQ (see Table II below). We also include the group-wise quantization results, as suggested by the reviewer kfPf. As evident, the proposed BoA uniformly outperforms GPTQ for all models. For almost all quantization configurations, BoA achieves at least 8% improvement over GPTQ in the zero-shot accuracy performance. In particular, the 2-bit quantized \\\"LLaMA2-7B\\\" model obtained by BoA even performs better than the \\\"LLaMA2-13B\\\" model quantized with GPTQ, even when group-wise quantization parameters are applied (see the performance of W2G256 LLaMA2-13B obtained by GPTQ). In this sense, we believe the improvement over GPTQ is not marginal. We hope for the reviewer's kind evaluation and acknowledgment of our effort.\\n\\n<Table II. Quantization performance of BoA and GPTQ on LLaMA2 and LLaMA3 models transformed via QuaRot. 
'GN' means that quantization has been applied to groups of N consecutive weights.>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n\\n|Model|Precision|Method|Wiki2|C4|\\n|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|39.56|47.37|\\n|||**BoA**|**14.77**|**18.41**|\\n||W2G256|GPTQ|37.63|43.46|\\n|||**BoA**|**13.41**|**16.80**|\\n||W2G64|GPTQ|29.38|36.77|\\n|||**BoA**|**11.63**|**14.68**|\\n||W2G16|GPTQ|13.75|17.22|\\n|||**BoA**|**8.880**|**11.33**|\\n|LLaMA2-13B|W2|GPTQ|21.89|27.48|\\n|||**BoA**|**11.93**|**18.14**|\\n||W2G256|GPTQ|15.17|19.24|\\n|||**BoA**|**10.47**|**13.71**|\\n||W2G64|GPTQ|13.15|17.09|\\n|||**BoA**|**9.116**|**11.99**|\\n||W2G16|GPTQ|9.819|13.28|\\n|||**BoA**|**7.231**|**9.589**|\\n|LLaMA3-8B|W2|GPTQ|40.30|51.92|\\n|||**BoA**|**23.50**|**31.47**|\\n||W2G256|GPTQ|34.65|43.50|\\n|||**BoA**|**21.41**|**29.09**|\\n||W2G64|GPTQ|25.83|36.04|\\n|||**BoA**|**17.85**|**24.81**|\\n||W2G16|GPTQ|15.81|22.98|\\n|||**BoA**|**13.00**|**18.90**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n\\n|Model|Precision|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|22.61|34.81|33.56|30.33|\\n|||**BoA**|30.89|55.05|51.22|**45.72**|\\n||W2G256|GPTQ|20.99|35.61|31.89|29.50|\\n|||**BoA**|29.61|55.01|52.72|**45.78**|\\n||W2G64|GPTQ|24.40|39.02|36.12|33.18|\\n|||**BoA**|33.02|59.85|55.68|**49.52**|\\n||W2G16|GPTQ|27.22|50.21|51.81|43.08|\\n|||**BoA**|36.43|63.47|63.13|**54.34**|\\n|LLaMA2-13B|W2|GPTQ|25.60|39.31|38.27|34.39|\\n|||**BoA**|31.31|58.38|53.07|**47.59**|\\n||W2G256|GPTQ|29.44|51.43|49.58|43.48|\\n|||**BoA**|35.67|62.04|59.32|**52.34**|\\n||W2G64|GPTQ|29.78|50.55|50.59|43.64|\\n|||**BoA**|37.54|64.39|62.45|**54.79**|\\n||W2G16|GPTQ|35.41|60.10|59.73|51.75|\\n|||**BoA**|42.32|69.78|69.04|**60.38**|\\n|LLaMA3-8B|W2|GPTQ|24.15|40.24|42.96|35.78|\\n|||**BoA**|29.86|51.68|51.85|**44.46**|\\n||W2G256|GPTQ|26.19|45.37|42.57|38.04|\\n|||**BoA**|30.46|55.26|52.61|**46.11**|\\n||W2G64|GPTQ|29.01|49.37|45.94|41.44|\\n|||**BoA**|32.94|59.64|54.95|**49.18**|\\n||W2G16|GPTQ
|34.39|61.91|58.29|51.53|\\n|||**BoA**|39.85|65.74|63.17|**56.25**|\"}", "{\"comment\": \"We sincerely appreciate the reviewer's further comments. Our point-by-point responses are as follows.\\n\\n**1. Positioning and contributions of our work** \\n - We kindly note that the proposed method is orthogonal to the transformation-based approaches that the reviewer mentioned. Specifically, recent PTQ methods for LLM quantization can be classified into two orthogonal categories.\\n - methods that optimize integer weights based on approximated Hessian matrices (e.g., GPTQ)\\n - methods that transform a model into a more quantization-favorable form (e.g., SmoothQuant [1], QuIP [2], Z-Fold [3], OmniQuant [4], AffineQuant [5], QuaRot [6], and SpinQuant [7])\\n - We emphasize that the proposed method is an integer weight optimization method, so we chose to compare primarily with GPTQ. \\n - It is worth noting that, similar to GPTQ, which can be integrated with transformation-based methods [2], [3], [6], [7], the proposed approach can also be combined with existing techniques to improve their performance. Indeed, our results in Table 5 (in the main text) and Table I (see below) demonstrate that the quantization performance can be boosted by combining the proposed method with existing transformation methods such as SmoothQuant, Z-Fold, and QuaRot. We note that the reason we chose SmoothQuant, Z-Fold, and QuaRot in our integration is that those methods do not rely on backpropagation, unlike other methods (OmniQuant, AffineQuant, and SpinQuant) that depend on gradient-based optimization.\\n\\n<Table I. 
INT2 quantization performance of BoA and GPTQ on LLaMA3-8B>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n|Transformation|Method|Wiki2|C4|\\n|-|-|-|-|\\n|None|GPTQ|76.77|54.50|\\n||**BoA**|**71.75**|**46.04**|\\n|QuaRot|GPTQ|40.30|51.92|\\n||**BoA**|**23.50**|**31.47**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n|Transformation|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|\\n|None|GPTQ|20.65|32.66|44.00|32.44|\\n||**BoA**|22.70|35.73|47.37|**35.27**|\\n|QuaRot|GPTQ|24.15|40.24|42.96|35.78|\\n||**BoA**|29.86|51.68|51.85|**44.46**|\\n\\n**2. Accuracy improvement over existing transformation-based methods**\\n - We understand the benefits of existing transformation-based methods that the reviewer mentioned. \\n - However, transformation-based methods, such as OmniQuant and AffineQuant, perform significantly worse than the proposed method, especially for low-bit quantization (see Table 6), because they rely on the naive nearest-rounding when assigning integer weights. **For example, the perplexity of OmniQuant and AffineQuant is larger than $10^{3}$ in some cases while the proposed BoA exhibits reasonable perplexity across all sizes of models (see Table 6).**\\n - Moreover, **OmniQuant and AffineQuant suffer from an unstable quantization process** due to the gradient approximation involved in the quantization parameter learning **(see 'NaN' in Table 6)**.\\n - Furthermore, we have newly compared the quantization performance of the proposed method, OmniQuant, and AffineQuant on LLaMA2 models (see Table II below). We note that LLaMA3 is excluded from our comparison because the official codes of OmniQuant and AffineQuant do not support the quantization of LLaMA3 models. Our results in Table 6 and Table II validate that the proposed method outperforms OmniQuant and AffineQuant, with respect to both perplexity and zero-shot accuracy performance.\\n\\n<Table II. INT2 quantization performance of BoA, GPTQ, and existing transformation-based methods on LLaMA2 models. 
'NaN' means that loss diverges in the quantization process>\\n\\n|Model|Method|Wiki2($\\\\downarrow$)|ARC-c|ARC-e|HellaSwag|Average($\\\\uparrow$)|\\n|-|-|-|-|-|-|-|\\n|LLaMA2-7B|GPTQ|39.56|22.61|34.81|33.56|30.33|\\n||OmniQuant|35.40|25.00|38.80|42.97|35.59|\\n||AffineQuant|NaN|NaN|NaN|NaN|NaN|\\n||**BoA**|**14.77**|30.89|55.05|51.22|**45.72**|\\n|LLaMA2-13B|GPTQ|21.89|25.60|39.31|38.27|34.39|\\n||OmniQuant|20.19|27.13|47.98|53.27|42.79|\\n||AffineQuant|18.49|30.80|52.90|57.74|47.15|\\n||**BoA**|**11.93**|31.31|58.38|53.07|**47.59**|\"}", "{\"title\": \"Thanks for the reply, I have raised my score.\", \"comment\": \"Thanks for the thorough explanation, I think the rebuttal has addressed my concern. Also after referring the response to other reviewers, I think the rebuttal is also convincing. So I decide to raise the Contribution to 3, Confidence to 4 and Overall rating to 6.\"}", "{\"comment\": \"The comparisons is not enough.\\n\\nThough the proposed method is training free, the quantization time compared to existing training-based methods (such as AutoRound [1], OmniQuant) is limited. Therefore, it should also include these methods into the comparison table, especially for the practical group-wise quantization.\\n\\n\\n[1] Optimize weight rounding via signed gradient descent for the quantization of llms\"}", "{\"comment\": [\"**3. BoA's actual overhead in terms of memory and processing time is greater than GPTQ.**\", \"As mentioned by the reviewer, GPTQ requires a shorter processing time and a smaller amount of memory than those required by the proposed BoA. This is because GPTQ quantizes all the rows of the weight matrix simultaneously by assuming independence between different layers. 
In contrast, BoA sequentially quantizes sub-weight matrices (see Figure 1(b)) to consider the inter-layer dependencies within the attention module, which eventually leads to significantly better quantization performance than GPTQ (at least 8% improvement in zero-shot accuracy; see Tables I and II above). It should be noted that such additional processing time is imposed only during the quantization step, and the real inference time of quantized models obtained by BoA is exactly the same as that of GPTQ.\", \"Clearly, there is a trade-off between quantization speed / memory cost and accuracy. In real situations, when one needs to preserve the performance of the original model as much as possible, the proposed BoA would be an intriguing solution (see Tables I and II above). Furthermore, we emphasize that the proposed BoA performs significantly better than existing gradient-based approaches (e.g., OmniQuant and AffineQuant), yet facilitates faster quantization (see Tables 6 and 13 in the main text).\", \"Even when the memory resource is limited, the proposed BoA can be used with some relaxation. Specifically, we note that the large memory cost of BoA for hyper-scale LLMs (e.g., 13B and 30B) is attributable to the row-wise Hessian for the value projection ($\\\\mathbf{X} \\\\mathbf{A} _{h} ^{T} \\\\mathbf{A} _{h} \\\\mathbf{X} ^{T}$; see Eq. (12)) whose shape is $H \\\\times d \\\\times d$ ($H$ is the number of attention heads and $d$ is the embedding dimension). In memory-limited cases, we can mitigate the memory cost of BoA by considering inter-layer dependencies only for query and key projections and applying the standard Hessian ($\\\\mathbf{X} \\\\mathbf{X} ^{T}$) for the value projection. 
Indeed, when considering only query and key projections, BoA requires almost the same amount of memory as GPTQ, while still exhibiting better performance (see Table 14 in the main text).\"]}", "{\"comment\": \"We appreciate the reviewer's valuable comments and constructive suggestions on our work.\\nOur point-to-point response is as follows.\\nPlease refer to the end of our final response for the list of references.\\n\\n**1. Limited comparison, suggesting the need for further validation such as SmoothQuant, LLMC, and QuIP**\\n\\n - We appreciate the reviewer's invaluable comments. First, we mention that the proposed method is orthogonal to the approaches that the reviewer mentioned. Specifically, recent LLM quantization methods can be classified into two orthogonal categories: \\n\\n - methods that optimize integer weights based on approximated Hessian matrices (e.g., GPTQ)\\n\\n - methods that transform a model into a more quantization-favorable form (e.g., SmoothQuant [1], QuIP [2], Z-Fold [3], OmniQuant [4], AffineQuant [5], and QuaRot [6])\\n\\n - We note that the proposed BoA is an integer weight optimization method, so we chose GPTQ as our baseline algorithm in our comparison. It should be noted that the proposed BoA can be used to enhance the performance of existing transformation methods. Indeed, our results in Table 5 demonstrate that the quantization performance can be boosted by combining BoA with existing transformation methods such as SmoothQuant and Z-Fold.\\n\\n - As suggested by the reviewer, we have newly measured the performance of BoA combined with the recent state-of-the-art transformation method QuaRot [6], which is included in the LLMC toolkit [7] (see Table I below). We observe that both BoA and GPTQ perform better when QuaRot is applied. As evident, the proposed BoA uniformly performs better than GPTQ. 
In particular, when QuaRot has been applied, BoA outperforms GPTQ by a significant margin (9% improvement in the zero-shot accuracy).\\n\\n - We note that we have excluded the comparison with QuIP because QuIP requires additional inference time and memory costs in the real inference stage. Specifically, in QuIP, the weight matrix $\\\\mathbf{W}$ is multiplied by random orthogonal matrices to suppress outliers within weights (i.e., $\\\\mathbf{W} \\\\leftarrow \\\\mathbf{U} \\\\mathbf{W} \\\\mathbf{V} ^{T}$ where $\\\\mathbf{U}$ and $\\\\mathbf{V}$ are random orthogonal matrices; see line 5 in [2, Algorithm 1]). While this technique (called incoherent processing) can suppress outliers, additional post-processing is needed to recover quantized weights ($\\\\widehat{\\\\mathbf{W}} \\\\leftarrow \\\\mathbf{U} ^{T} \\\\widehat{\\\\mathbf{W}} \\\\mathbf{V}$; see line 3 in [2, Algorithm 2]). Such post-processing should be done in the real inference stage, thereby incurring additional inference time and memory costs for storing orthogonal matrices $\\\\mathbf{U}$ and $\\\\mathbf{V}$. To accelerate the post-processing, one can utilize some special hardware or develop dedicated kernels, but unlike server-grade GPUs (e.g. NVIDIA A100), on-device NPUs (e.g. Qualcomm Hexagon) lack support for such additional processing, and customizing kernels for desired functionalities is very challenging.\\n\\n<Table I. 
INT2 quantization performance of BoA and GPTQ on LLaMA3-8B>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n|Transformation|Method|Wiki2|C4|\\n|-|-|-|-|\\n|None|GPTQ|76.77|54.50|\\n||**BoA**|**71.75**|**46.04**|\\n|QuaRot|GPTQ|40.30|51.92|\\n||**BoA**|**23.50**|**31.47**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n|Transformation|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|\\n|None|GPTQ|20.65|32.66|44.00|32.44|\\n||**BoA**|22.70|35.73|47.37|**35.27**|\\n|QuaRot|GPTQ|24.15|40.24|42.96|35.78|\\n||**BoA**|29.86|51.68|51.85|**44.46**|\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer gH6w,\\n\\nWe are very glad to hear that your concerns have been addressed!\\n\\nWe sincerely appreciate your recognition of our efforts and the time you dedicated to reviewing our paper.\\n\\nRespectfully,\\n\\nAuthors of Paper 3463\"}", "{\"comment\": \"We appreciate the reviewer's comments.\\n\\n - As suggested, we have compared the group-wise quantization performance of the proposed BoA, OmniQuant [1], and AutoRound [2] (see Table I below). 
\\n - For OmniQuant, we have summarized the results reported in the original paper [Table 1, 1].\\n - For AutoRound, we have run the official code provided by the authors and reported the obtained results.\\n\\n - **Unstable and unsatisfactory performance of AutoRound**\\n - AutoRound suffers from an unstable training process due to the gradient approximation involved in the quantization parameter learning (see 'NaN' in [Table 14, 2]), which is similar to OmniQuant (see 'NaN' in Table 6).\\n - The 2-bit quantization performance of AutoRound collapses (perplexity is larger than $10^{3}$), as reported in the original paper [Section 6, 2].\\n\\n - **Performance comparison**\\n - As evident, regardless of whether group-wise quantization is applied, the proposed BoA performs better than OmniQuant and AutoRound.\\n - In particular, the proposed BoA outperforms OmniQuant and AutoRound by a significant margin for the 2-bit quantization.\\n\\nWe believe that these results are sufficient to conclude that the proposed BoA exhibits competitive performance for group-wise quantization as well as standard per-channel quantization. We hope the reviewer finds the above satisfactory. If you have any further concerns, please let us know.\\n\\n<Table I. Group-wise quantization performance (perplexity ($\\\\downarrow$) for WikiText-2) of the proposed BoA, OmniQuant, and AutoRound. 'GN' means that quantization has been applied to groups of N consecutive weights.>\\n\\n|Precision|Method|LLaMA2-7B|LLaMA2-13B|\\n|-|-|-|-|\\n|W2G128|OmniQuant|11.06|8.26|\\n||AutoRound|19.51|7.91|\\n||**BoA**|**9.78**|**7.81**|\\n|W2G64|OmniQuant|9.62|7.56|\\n||AutoRound|17.70|7.51|\\n||**BoA**|**8.99**|**7.39**|\\n|W2|OmniQuant|37.37|17.21|\\n||AutoRound|1.0e4|2.6e3|\\n||**BoA**|**11.04**|**8.94**|\\n\\n<List of references>\\n\\n[1] W. Shao et. al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[2] W. Cheng et. 
al., \\\"Optimize weight rounding via signed gradient descent for the quantization of LLMs,\\\" EMNLP 2024.\"}", "{\"comment\": \"We appreciate the reviewer's valuable comments and constructive suggestions on our work.\\nOur point-to-point response is as follows.\\nPlease refer to the end of our final response for the list of references.\\n\\n**1. Limited novelty**\\n\\n - We appreciate the reviewer's comment. As the reviewers RmBt and kfPf acknowledged, we believe that our contribution is innovative in the sense that the proposed BoA is the first quantization method that attempts to capture inter-layer dependencies **without backpropagation**.\\n\\n - While it is well-known that capturing inter-layer dependencies is beneficial for quantization, all the existing works rely on time-consuming gradient-based optimization [1], [2], [3], which would not be suitable for real-world deployment where models to be deployed are frequently updated and multiple times of hyper-parameter searches are needed. Indeed, the first PTQ method that attempts to capture inter-layer dependencies (called BRECQ [1]) needs more than 10 hours even for relatively small-sized models (e.g., OPT-1.3B), and requires multiple GPU resources to quantize LLMs having more than 7B parameters.\\n\\n - Recently, OmniQuant [2] and AffineQuant [3] accelerated the quantization processing time by learning only a small number of quantization parameters (scale and zero-point) and certain parameters related to the model transformation. However, they suffer from an unstable quantization process due to the gradient approximation involved in the quantization parameter learning and sacrifice the low-bit performance because they apply the naive nearest-rounding when assigning integer weights (see Table 6). 
Furthermore, although OmniQuant and AffineQuant do not optimize integer weights, their processing time is still longer (e.g., 4 times longer for AffineQuant) than that required by the proposed BoA (see Table 13(a)).\\n\\n - To avoid the aforementioned disadvantages, we established the attention-aware Hessians, making this the first work to consider inter-layer dependencies while circumventing gradient-based optimization. We emphasize that existing Hessian-based PTQ methods, such as GPTQ, cannot capture inter-layer dependencies, which results in significantly worse performance than the proposed method (see Table I for a comparison on recent LLMs such as LLaMA2 and LLaMA3). Moreover, we presented several relaxation techniques, without which multiple GPU resources are required and the quantization cannot be done in a reasonable processing time; for more details, please refer to our response to the next comment. Due to these reasons, we believe that the contribution of this work is meaningful and valuable. We hope for the reviewer's kind evaluation and acknowledgment of our effort to develop a practical quantization solution that captures inter-layer dependencies.\\n\\n<Table I. Quantization performance of BoA and GPTQ on LLaMA2 and LLaMA3 models transformed via QuaRot. 
'GN' means that quantization has been applied to groups of N consecutive weights.>\\n\\n(a) Perplexity ($\\\\downarrow$)\\n\\n|Model|Precision|Method|Wiki2|C4|\\n|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|39.56|47.37|\\n|||**BoA**|**14.77**|**18.41**|\\n||W2G256|GPTQ|37.63|43.46|\\n|||**BoA**|**13.41**|**16.80**|\\n||W2G64|GPTQ|29.38|36.77|\\n|||**BoA**|**11.63**|**14.68**|\\n||W2G16|GPTQ|13.75|17.22|\\n|||**BoA**|**8.880**|**11.33**|\\n|LLaMA2-13B|W2|GPTQ|21.89|27.48|\\n|||**BoA**|**11.93**|**18.14**|\\n||W2G256|GPTQ|15.17|19.24|\\n|||**BoA**|**10.47**|**13.71**|\\n||W2G64|GPTQ|13.15|17.09|\\n|||**BoA**|**9.116**|**11.99**|\\n||W2G16|GPTQ|9.819|13.28|\\n|||**BoA**|**7.231**|**9.589**|\\n|LLaMA3-8B|W2|GPTQ|40.30|51.92|\\n|||**BoA**|**23.50**|**31.47**|\\n||W2G256|GPTQ|34.65|43.50|\\n|||**BoA**|**21.41**|**29.09**|\\n||W2G64|GPTQ|25.83|36.04|\\n|||**BoA**|**17.85**|**24.81**|\\n||W2G16|GPTQ|15.81|22.98|\\n|||**BoA**|**13.00**|**18.90**|\\n\\n(b) Zero-shot accuracy ($\\\\uparrow$)\\n\\n|Model|Precision|Method|ARC-c|ARC-e|HellaSwag|Average|\\n|-|-|-|-|-|-|-|\\n|LLaMA2-7B|W2|GPTQ|22.61|34.81|33.56|30.33|\\n|||**BoA**|30.89|55.05|51.22|**45.72**|\\n||W2G256|GPTQ|20.99|35.61|31.89|29.50|\\n|||**BoA**|29.61|55.01|52.72|**45.78**|\\n||W2G64|GPTQ|24.40|39.02|36.12|33.18|\\n|||**BoA**|33.02|59.85|55.68|**49.52**|\\n||W2G16|GPTQ|27.22|50.21|51.81|43.08|\\n|||**BoA**|36.43|63.47|63.13|**54.34**|\\n|LLaMA2-13B|W2|GPTQ|25.60|39.31|38.27|34.39|\\n|||**BoA**|31.31|58.38|53.07|**47.59**|\\n||W2G256|GPTQ|29.44|51.43|49.58|43.48|\\n|||**BoA**|35.67|62.04|59.32|**52.34**|\\n||W2G64|GPTQ|29.78|50.55|50.59|43.64|\\n|||**BoA**|37.54|64.39|62.45|**54.79**|\\n||W2G16|GPTQ|35.41|60.10|59.73|51.75|\\n|||**BoA**|42.32|69.78|69.04|**60.38**|\\n|LLaMA3-8B|W2|GPTQ|24.15|40.24|42.96|35.78|\\n|||**BoA**|29.86|51.68|51.85|**44.46**|\\n||W2G256|GPTQ|26.19|45.37|42.57|38.04|\\n|||**BoA**|30.46|55.26|52.61|**46.11**|\\n||W2G64|GPTQ|29.01|49.37|45.94|41.44|\\n|||**BoA**|32.94|59.64|54.95|**49.18**|\\n||W2G16|GPTQ
|34.39|61.91|58.29|51.53|\\n|||**BoA**|39.85|65.74|63.17|**56.25**|\"}", "{\"summary\": \"This paper presents a training-free post-training quantization method based on GPTQ. It introduces inter-layer interaction by calculating Hessian matrices using an attention module instead of a simple linear module in LLMs. Additionally, the paper proposes techniques to improve the efficiency of Hessian matrix calculations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Introducing inter-layer interaction in a training-free manner is innovative.\\n2. The paper is well-written.\", \"weaknesses\": \"1. The experimental setup is somewhat outdated. Additional experiments on newer models, such as LLama-2 and LLama-3, are needed.\\n2. Although the paper introduces a training-free PTQ method, it may be slower than training-based methods. For example, Table 2 shows that BOA takes 1 hour to quantize 2.7B models, while GPTQ quantizes larger 13B models in only 21 minutes. OmniQuant, a training-based method, requires only ~1.1 hours for 7B models. The paper should provide comprehensive comparisons of quantization times to demonstrate the proposed method's effectiveness.\\n3. The paper focuses on 2-bit per-channel quantization and mentions that \\\"group-wise parameters result in additional memory costs and processing time during inference.\\\" However, weight-only quantization aims to alleviate memory constraints during the decoding stage. Group-wise quantization introduces negligible overhead but significantly improves performance and is a common practice in existing inference engines. 
Therefore, the paper should include results for group-wise quantization.\", \"questions\": \"Please refer weaknesses for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a post-training quantization method called BOA that incorporates inter-layer dependencies without relying on backpropagation. BOA leverages attention-aware Hessian matrices to capture dependencies within the attention module, a relatively rare approach in existing PTQ methods. Additionally, BOA demonstrates compatibility with techniques like SmoothQuant and Z-FOLD, allowing for further enhancements in quantization performance. However, despite these strengths, BOA does not show sufficient memory and processing time benefits compared to existing PTQ methods. The experiments are conducted on outdated models, and the comparison methods lack recent advancements. Adding more experiments with up-to-date models and techniques would strengthen the paper.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper introduces an innovative PTQ method that cleverly captures inter-layer dependencies within attention modules through attention-aware Hessian matrices while avoiding backpropagation overhead.\\n2.\\tBOA is compatible with other techniques, such as SmoothQuant and Z-FOLD, enabling further improvements in quantization accuracy by integrating different quantization strategies.\", \"weaknesses\": \"1.\\tThe experiments are primarily conducted on BLOOM, LLaMA1, and OPT models, which are somewhat outdated compared to current state-of-the-art models. 
The paper lacks validation on more recent models, such as the LLaMA3 series.\\n2.\\tAlthough the paper introduces various techniques to reduce computational overhead and claims to use a Hessian-based strategy to avoid time-consuming gradient-based optimization, as shown in Table 13, BOA\\u2019s actual overhead in terms of memory and processing time is greater than GPTQ. Additionally, in Tables 3, 4, and 5, even under 2-bit quantization, BOA's improvement over GPTQ is marginal. For Table 6, it\\u2019s worth noting that GPTQ can also integrate certain quantization algorithms, like QuaRot [1] and SpinQuant [2], to achieve better results. Including comparisons with these methods is recommended.\\t\\n\\n[1] Ashkboos S, Mohtashami A, Croci M L, et al. Quarot: Outlier-free 4-bit inference in rotated LLMs. arXiv preprint arXiv:2404.00456, 2024.\\n[2] Liu Z, Zhao C, Fedorov I, et al. SpinQuant\\u2014LLM quantization with learned rotations. arXiv preprint arXiv:2405.16406, 2024.\", \"questions\": \"1.How does the performance of BOA compare when tested on more advanced models, such as the LLaMA3 series, instead of the relatively outdated models used in the paper?\\n2.How does BOA's accuracy compare to more recent quantization methods, such as QuaRot and SpinQuant?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer XkiM\\n\\nThanks for your time you dedicated to reviewing our paper!\\n\\nYou were concerned about our paper's marginal improvement, further validation on recent methods, and limited contributions.\", \"we_think_our_main_rebuttal_addresses_these_concerns_due_to_the_following_reasons\": [\"We have provided integration results with recent transformation-based method (named QuaRot), which is included in the LLMC toolkit that the reviewer suggested, and group-wise quantization results. 
Our results demonstrate that for almost all quantization configurations, **the proposed BoA achieves at least 8% improvement** over GPTQ in the zero-shot accuracy performance (see Tables I and II in the main rebuttal), which we believe is a significant advancement. In particular, the 2-bit quantized \\\"LLaMA2-7B\\\" model obtained by BoA even performs better than the \\\"LLaMA2-13B\\\" model quantized with GPTQ, even applied with group-wise quantization parameters (see the performance of W2G256 LLaMA2-13B obtained by GPTQ).\", \"We have emphasized that the proposed method is the **first to capture inter-layer dependencies without backpropagation**. Existing methods that attempt to capture inter-layer dependencies rely on time-intensive, gradient-based optimization, which results in much longer quantization processing time. For example, **AffineQuant needs 18.41 hours and 44.25 hours for quantizing 13B and 30B models, respectively, while the proposed method can finish quantization in 5 hours and 11 hours, respectively.**\", \"If you have any further concerns, please let us know. If not, we would be very grateful if you were to consider increasing your score.\"]}", "{\"comment\": \"**2. The authors introduce optimization approaches like Hessian relaxation and efficient computation of inverse Hessians, but the results did not show the effect of these optimization methods.**\\n\\n - We appreciate the reviewer's constructive suggestion. To say conclusion first, without each component developed to simplify the quantization process, the proposed method cannot finish quantization in a reasonable time with a single GPU.\\n\\n - Without the proposed relaxation on Hessians, we need to compute and store the Jacobian matrix $\\\\mathbf{J}_{\\\\sigma}$ for the softmax function (see Eqs. (8) and (11)). 
Because the shape of $\\\\mathbf{J} _{\\\\sigma}$ is $H \\\\times L \\\\times L \\\\times L$ where $H$ is the number of attention heads and $L$ is the input sequence length, storing $\\\\mathbf{J} _{\\\\sigma}$ requires more than 400 GB memory even for OPT-125M ($H=12$ and $L=2048$), which is not possible with a single A100 GPU of 80 GB memory.\\n\\n - Without the proposed efficient computation of inverse Hessians, we need to compute the inverse matrix of $\\\\mathbf{H} = \\\\mathbf{H} _{\\\\text{col}} \\\\otimes \\\\mathbf{H} _{\\\\text{row}}$ for each attention head where the shapes of $\\\\mathbf{H} _{\\\\text{col}}$ and $\\\\mathbf{H} _{\\\\text{row}}$ are $d \\\\times d$ and $d _{h} \\\\times d _{h}$, respectively ($d _{h}$ is the head dimension and $d = Hd _{h}$). Before computing the inverse Hessian, the Kronecker product of $\\\\mathbf{H} _{\\\\text{col}}$ and $\\\\mathbf{H} _{\\\\text{row}}$ ($\\\\mathbf{H} _{\\\\text{col}} \\\\otimes \\\\mathbf{H} _{\\\\text{row}}$) needs to be computed and stored. In other words, we need to save a $d _{h}d \\\\times d _{h}d$ matrix for each attention head, which requires more than 100 GB memory even for OPT-125M. Obviously, this is not possible with a single A100 GPU.\\n\\n - By assuming the independence between different attention heads, we could quantize rows belonging to different attention heads simultaneously. Without such simultaneous quantization of different heads, all rows need to be quantized sequentially (respectively), which cannot properly utilize the massive compute capabilities of modern GPUs. Indeed, we check that sequential quantization of all rows results in a significantly longer (at least 10 times longer) processing time than that required by the proposed simultaneous quantization (see Table 2 in the main text).\\n\\n - In the final version, we will discuss these points to elucidate the benefits of each proposed component.\\n\\n<List of references>\\n\\n[1] Y. Li et. 
al., \\\"BRECQ: Pushing the limit of post-training quantization by block reconstruction,\\\" ICLR 2021.\\n\\n[2] W. Shao et. al., \\\"OmniQuant: Omnidirectionally calibrated quantization for large language models,\\\" ICLR 2024.\\n\\n[3] Y. Ma et. al., \\\"AffineQuant: Affine transformation quantization for large language models,\\\" ICLR 2024.\"}", "{\"comment\": \"Dear Reviewer kfPf\\n\\nThanks for your time you dedicated to reviewing our paper!\\n\\nYou were concerned that we had not provided experimental results on recent models, group-wise quantization results, and comparisons of quantization processing times.\", \"we_think_our_main_rebuttal_addresses_these_concerns_due_to_the_following_reasons\": [\"We have provided quantization results on recent LLaMA2 and LLaMA3 models and group-wise quantization results. Our results demonstrate that for almost all quantization configurations, the proposed BoA achieves **at least 8% improvement over GPTQ** in the zero-shot accuracy performance (see Table I in the main rebuttal). In particular, the 2-bit quantized \\\"LLaMA2-7B\\\" model obtained by BoA even performs better than the \\\"LLaMA2-13B\\\" model quantized with GPTQ, even applied with group-wise quantization parameters (see the performance of W2G256 LLaMA2-13B obtained by GPTQ).\", \"We have provided comprehensive comparisons of quantization processing times. Overall, the proposed BoA performs significantly better than existing training-based approaches such as OmniQuant and AffineQuant (see Table 6), yet facilitates faster quantization (e.g., **4 times faster than AffineQuant**; see Table 13(a)).\", \"If you have any further concerns, please let us know. If not, we would be very grateful if you were to consider increasing your score.\"]}" ] }
0KHW6yXdiZ
An End-to-End Model For Logits Based Large Language Models Watermarking
[ "KA HIM WONG", "Jicheng Zhou", "Jiantao Zhou", "Yain-Whar Si" ]
The rise of large language models (LLMs) has increased concerns over source tracing and copyright protection for AI-generated content (AIGC), highlighting the need for advanced detection technologies. Passive detection methods usually face high false positives, while active watermarking techniques using logits or sampling manipulation offer more effective protection. Existing LLM watermarking methods, though effective on unaltered content, suffer significant performance drops when the text is modified and could introduce biases that degrade LLM performance in downstream tasks. These methods fail to achieve an optimal tradeoff between text quality and robustness, particularly due to the lack of end-to-end optimization of the encoder and decoder. In this paper, we introduce the first end-to-end logits perturbation method for watermarking LLM-generated text. By jointly optimizing the encoder and decoder, our approach achieves a better balance between quality and robustness. To address non-differentiable operations in the end-to-end training pipeline, we introduce an online prompting technique that leverages the on-the-fly LLM as a differentiable surrogate. Our method demonstrates superior detection robustness, consistently outperforming state-of-the-art (SOTA) methods by 1.2\%, 4.0\%, and 5.5\% across 3 LLMs, averaged over 6 types of text distortions. Simultaneously, our approach achieves exceptional text quality, as evidenced by reduced text perplexity and improved performance in the downstream tasks with a margin of 19.2\% and 3.03\%. Our method can be easily generalized to different LLMs. The code is available in supplementary material.
[ "LLM watermarking", "End-to-end optimization", "Robustness" ]
Reject
https://openreview.net/pdf?id=0KHW6yXdiZ
https://openreview.net/forum?id=0KHW6yXdiZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sPehp1v6MY", "s8IDRH3A52", "rfcjVl75xI", "onRpnazVFj", "mDBB3Cvodv", "hg7MHeyixh", "fAXqE1hAMN", "dFFlMIPbuF", "axYCh94BLw", "YITaKcQolU", "SBL1PfOCRI", "QhIBmPWtsD", "PzIWQs1Kus", "PjjklRtN2x", "N0w2XdM36q", "MoIJgIujzl", "L7vmIPhhc2", "L2ksxnxLMN", "KK8YaQk47k", "JmBXveRhpP", "HnoaucwSzL", "9xPw0TEOGh", "9t1hTmFjNi", "8DKBRplU8G", "6xKmCCtgt5", "5LjbjJvJuV", "2XnUtTgxyI", "1uJMH9qiJQ", "0s5nzq2Otf", "01dZ0mQHrU" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1732257431502, 1732616459954, 1732258219123, 1732620512964, 1732627017158, 1732788733364, 1732634774505, 1733290995491, 1732415235665, 1732589931990, 1730679408487, 1732617147093, 1732258564788, 1732669382143, 1732613816040, 1732513035879, 1732261126751, 1735172503095, 1732588644784, 1732937895793, 1737523999791, 1732590429156, 1732258954626, 1732687541871, 1733113770190, 1732544320242, 1730624458777, 1730072667282, 1732634356584, 1730734386047 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_sSkQ" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_JkAA" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_QAhy" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_sSkQ" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_JP2f" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Area_Chair_3LuW" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Authors" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_JP2f" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_JkAA" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_QAhy" ], [ "ICLR.cc/2025/Conference/Submission9685/Reviewer_sSkQ" ] ], "structured_content_str": [ "{\"title\": \"Response to All Reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate the time and effort the reviewers have dedicated to evaluating our manuscript. The concerns and feedback raised during the initial review have significantly contributed to enhancing the quality of our paper. Below, we summarize the key responses to the reviewers' suggestions and questions.\\n\\n### About the Robustness/Quality Trade-off\\n\\nThe main concern of our method is about the lowest log diversity in Fig. 6 (b). and the relatively large bias in Fig. 7 (c). \\n\\n1. **Superior Overall Performance** \\nWe argue that the importance of different quality metrics is not identical. 
As shown in our experiments, our model achieves superior downstream task performance despite having relatively lower log diversity and higher token bias. This is because multiple token candidates can fit within a given context, providing flexibility without sacrificing task performance. A detailed comparison of our method with SOTA competitors is presented in Table 5, accompanied by a radar graph in Fig. 9. These results demonstrate the clear advantages of our approach, including significant improvements in robustness, perplexity, and downstream task performance. This highlights our model's ability to achieve a well-balanced performance across key metrics.\\n\\n2. **User-Controllable Trade-offs** \\n Our model allows users to adjust the balance of different quality metrics by modifying the watermark strength \\u03b4 and the top-*k* logits tokens without retraining the model. To illustrate this, we conducted additional experiments presented in Table 5, where we adjust the value of *k* to 40 while keeping other model settings unchanged. The experiments reveal that increasing the top-*k* can enhance log diversity but could slightly increase perplexity. Similarly, as shown in Fig. 12, reducing \\u03b4 helps lower token bias, though this adjustment could compromise robustness.\\n\\n### About End-to-End Model Training\\n\\nThe main concerns regarding the end-to-end model training are differentiability and complexity. \\n\\n1. **Differentiability** \\n It is important to clarify that all prompts and generated text remain in the embedding domain throughout the process. In our proposed online prompting, the prompt is first converted into the embedding domain and then concatenated with X_wm/X_nwm. This ensures the entire process is differentiable, as we avoid the text-embedding transformation, which is the primary source of non-differentiability.\\n\\n2. 
**Training Resources and Hyperparameters** \\n Details about the training resources and hyperparameters are provided in Appendix F.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and continued engagement with our work. We truly appreciate your insights and are pleased to address your comments in detail below:\\n\\n### 1. Superior Performance of Our Method \\nWe invite you to review our updated manuscript and the additional experimental results provided in ***Appendix A***. These results offer a comprehensive comparison with state-of-the-art (SOTA) competitors. Our model demonstrates significant improvements across multiple dimensions: a ***5.33% increase in robustness***, a ***9.76% reduction in perplexity (PPL)***, and a ***7.88% boost in downstream task performance***. \\n\\n### 2. Difference from Generation-Based Methods \\nOur end-to-end approach fundamentally differs from Abdelnabi et al. [C], particularly in the watermark embedding process. Abdelnabi et al. [C] embed watermarks post-generation, while our method embeds the watermark during text generation. A detailed discussion on the advantages of logit-based methods (including ours) over generation-based methods (Abdelnabi et al. [C]) is provided in Appendix H. \\n\\nOur contribution lies in proposing the ***first end-to-end model specifically designed for the logit-based watermarking scheme***. This approach integrates the entire LLM into the training pipeline, addressing non-differentiable modules in such a pipeline through our innovative online prompting and our model demonstrates superior performance over key metrics. \\n\\n### 3. Reproducibility \\nTo ensure reproducibility, we have included our model's code in the supplementary materials. Additionally, we provide extensive details to address your concerns, including further experimental results in Appendix A, training details in Appendix F, and attack configuration details in Appendix G. 
\\n\\nWe sincerely thank you for your valuable feedback and hope these clarifications address your concerns. Please do not hesitate to share any further questions or suggestions.\"}", "{\"comment\": [\"### W1: Transparency and Robustness/Quality Trade-off\", \"We understand your concerns regarding transparency. To address this, we have introduced robustness/quality scatter plots based on watermark strength in Fig. 8. These plots demonstrate that our model achieves the best robustness/quality trade-off overall.\", \"We strictly follow the attack settings established by prior works [A] and [B]. For added clarity, we have included a table in Appendix G that provides descriptions, specific prompts, and hyperparameter settings for each attack presented in Fig. 5.\", \"### W2: Related Work of Generation-based Methods\", \"We have included a citation to [C] and added a discussion on the advantages of logits-based watermarking compared to generation-based methods in Appendix H.\", \"### W3: Token distribution and undetectability\", \"For the explanation of token distribution, please refer to the \\\"Response to All Reviewers\\\". There is indeed an inherent trade-off between robustness and token bias. As shown in Fig. 12, reducing the watermark strength helps lower token bias, though this adjustment could compromise robustness.\", \"We present detection accuracy against training size for classifying paired (using the same prompt) watermarked/non-watermarked samples in Appendix E. The results demonstrate that existing methods (including our method) can be detected by an external classifier with paired samples when the training size is sufficient. However, we argue that such paired samples are unlikely to be accessible to adversaries, as the adversaries can only request watermarked text from the watermarked LLM API and search for unwatermarked text elsewhere. 
However, if watermarked/non-watermarked samples are obtained using identical prompts from different LLMs, a detector could exploit the domain gap between the two LLMs to achieve high accuracy with trivial solutions. Thus, we still conduct the paired samples experiment for reference.\", \"### W4: Watermarked Samples and Training Pipeline\", \"We have added examples of watermarked and non-watermarked text samples in Appendix C to validate the quality of our watermarked sentences.\", \"The parameters of the online LLM are frozen during training so that the online LLM is not updated in training. Therefore, the use of Gumbel-Softmax does not degrade the quality of the LLM output.\", \"Since we involve the entire LLM in the training pipeline for backpropagating gradients, our method requires relatively large computational resources and time compared to existing training-based methods. We have included detailed information about the training process in Appendix F. Despite this, once the model is trained, we develop an efficient converter for cross-LLM inference, ensuring that the computational cost during inference remains low.\", \"### Q1: Detectability Experiment\", \"We follow the undetectability experiment setup from [A], where the prompts for watermarked and non-watermarked texts differ.\", \"Refer to \\u2018W3: Token distribution and undetectability\\u2019\", \"### Q2: Embedding Domain and Semantic Similarity\", \"The $L_{\\\\text{sem}}$ term is not a novel contribution but a widely used method to compute semantic similarity between two sentences [D].\", \"At the beginning of the generation, the strong alignment between $X_{\\\\text{wm}}$ and $X_{\\\\text{nwm}}$ provides solid supervision. 
As the sequences diverge, gradients flowing back from $L_{\\\\text{sem}}$ at every step still help the watermark encoder learn to minimize semantic differences.\", \"We argue that computing the $L_{\\\\text{sem}}$ term across a batch of samples remains effective, as shown in the ablation study in Table 4. Removing $L_{\\\\text{sem}}$ leads to a significant increase in perplexity.\", \"### Q3: Gumbel-Softmax and Online Text Editor\", \"The Gumbel-Softmax is employed in the online text editor $N$ to introduce randomness. $N$ is activated with a probability of 0.5, ensuring that the decoder also receives non-edited watermarked text $X_{\\\\text{wm}}$.\", \"[A] Leyi Pan, Aiwei Liu, Zhiwei He, Zitian Gao, Xuandong Zhao, Yijian Lu, Binglin Zhou, Shuliang Liu, Xuming Hu, Lijie Wen, et al. Markllm: An open-source toolkit for llm watermarking. arXiv preprint arXiv:2405.10051, 2024\", \"[B] Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, and Lijie Wen. A semantic invariant robust watermark for large language models. In Proc. Int. Conf. Learn. Representat., 2024.\", \"[C] Abdelnabi, Sahar, and Mario Fritz. \\\"Adversarial watermarking transformer: Towards tracing text provenance with data hiding.\\\" 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021.\", \"[D] Sachin Chanchani and Ruihong Huang. Composition-contrastive learning for sentence embeddings. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 2023.\"]}", "{\"comment\": \"Dear Authors,\\n\\n1. Could you please explain the difference between Appendix A and the results in Figures 5 and 6? It appears that they are inconsistent. Are these results statistically significant? \\n\\n2. I agree that your approach differs from AWT.\", \"a_minor_remark\": \"The name \\\"generation-based\\\" in Appendix H for methods like AWT is confusing, as the watermarking happens after generation, as you correctly describe.\\n\\n3. Good, I am happy to see that. \\n\\n4. 
As I stated in my original review, the converter is a good idea. \\n\\nThe main problems I have are that the improvements do not appear to be statistically significant and that the method incurs a higher computational cost by design. Thus, its usefulness for future work might be limited. I am willing to raise my score if the authors provide convincing arguments about why they believe that is not the case.\"}", "{\"comment\": \"Thank you for your positive feedback and continued engagement with our manuscript. We greatly appreciate your insights and have carefully addressed your comments below:\\n\\n### 1. **Difference Between Appendix A and Fig. 5 & 6** \\nIn the initial version of our manuscript, we evaluated our method using the default settings ($\\\\delta = 1.25, k = 20$). Thus, the results shown in Fig. 5 and 6 reflect the performance of our default model. \\n\\nThanks to your comments regarding robustness/quality trade-offs, we reorganized the results into Table 5 and Fig. 9 to provide a clearer comparison by presenting robustness and quality metrics in a unified table/graph. Notably, the results of our default model remain identical to those in Fig. 5 and 6. \\n\\nAdditionally, we introduced an alternative model configuration ($\\\\delta = 1.25, k = 40$) to demonstrate the flexibility of our approach in adjusting quality metrics to align with user preferences. For greater transparency, we also included robustness/quality scatter plots based on watermark strength in Fig. 8. \\n\\n### 2. **Clarification of the Term \\\"Generation-Based\\\"** \\nWe acknowledge that the term \\\"generation-based\\\" may be confusing, as the watermarking process occurs post-generation. In the updated manuscript, we have revised this terminology to \\\"post-generation methods\\\" for greater clarity in Appendix H.\\n\\n### 3. 
**Statistically Significant Improvements in Our Method** \\nWe understand the importance of verifying the statistical significance of our model's improvements. As demonstrated in Appendix A and Fig. 5 & 6, our method ***consistently*** outperforms SOTA competitors in averaged F1 scores across three LLMs. Additionally, it achieves superior performance in translation and code generation tasks, along with lower PPL, highlighting the effectiveness of our approach. \\n\\nTo further substantiate these results, we are conducting robustness and quality experiments with a paired t-test to validate the statistical significance of our improvements. Since these experiments are time-intensive, we will share the results as soon as they are completed. \\n\\n### 4. **Justification of Higher Computational Cost** \\nWhile our method incurs higher computational costs compared to existing KGW-based methods, we argue that these costs are reasonable. As detailed in Appendix F, we trained our end-to-end model only using ***one single NVIDIA RTX A6000 (48GB) GPU***. For scenarios with limited GPU memory, the batch size and maximum generated tokens can be reduced, or a smaller LLM, such as OPT-125M, can be utilized. \\n\\nImportantly, our model is trained offline, meaning the training process does not need to occur in real-time and can be executed on the server end. Once trained, the converter is deployed for efficient inference. \\n\\nWe hope these clarifications address your concerns, and we are happy to provide additional details or respond to further questions. Thank you again for your valuable feedback!\"}", "{\"title\": \"Thank You for Your Review and Feedback\", \"comment\": \"We want to express our sincere gratitude for your thoughtful and constructive feedback on our paper. 
We deeply appreciate the time and effort you have dedicated to reviewing our submission, and we are especially grateful for the increased score you have awarded us.\"}", "{\"title\": \"Keep my rating\", \"comment\": \"Thanks for the rebuttal. My main concerns about the motivation and the contribution are not addressed. I decide to keep my rating.\\n\\nDistortion-free watermarks have a theoretical guarantee to preserve the text quality, the statement \\\"even distortion-free watermarking can degrade the quality of output text\\\" is not true. The degrade of the text quality observed in the related works could be caused by the different experimental settings. In Hu et al [1], the distortion-free watermark has the same generation quality of the output text as the original LM. Thus, the trade-off between the quality and detectability does not exist. \\n\\nHu et al. Unbiased watermark for large language models, ICLR 2024\"}", "{\"title\": \"Comprehensive and Final Summary on Submission9685\", \"comment\": \"Dear Area Chairs,\\n\\nWe sincerely appreciate the time, effort, and valuable insights provided during the review process. It is encouraging to note that **three out of four reviewers have given positive scores** to our submission, recognizing the significance of our work. The constructive feedback has been instrumental in enhancing the quality and clarity of our paper, and we are grateful for the opportunity to improve our work. \\n\\nWe are pleased to report that after addressing their concerns, Reviewers **sSkQ**, **QAhy**, and **JP2f** have increased their scores. However, with the deadline approaching, we would like to kindly note that if Reviewer **JkAA** raises any further questions or concerns regarding our recent responses, we do not have the opportunity to provide additional clarification. \\n\\n---\\n\\n### Positive Aspects Highlighted by Reviewers \\n\\n#### **1. 
Novelty and Contribution** \\n**All reviewers** acknowledge the novelty of our end-to-end model, which adapts against potential attacks during optimization, thereby enhancing robustness. Reviewers **sSkQ** and **QAhy** further highlight the innovative \\\"convert\\\" module that enables cross-LLM inference, improving model transferability. \\n\\n#### **2. Comprehensive Experimental Validation** \\nReviewers **sSkQ**, **QAhy**, and **JkAA** mention the thoroughness of our experiments, which comprehensively evaluate key aspects such as robustness, quality, undetectability, and efficiency. Reviewer **JP2f** specifically appreciates the additional experimental results in Appendices, which provide a detailed comparison of the trade-offs between robustness and text quality. \\n\\n#### **3. Presentation and Clarity** \\nReviewers **sSkQ** and **JkAA** praise the well-organized and accessible structure of our paper.\\n\\n---\\n\\n### Concerns Raised by Reviewers and Our Responses \\n\\nWe have summarized the main concerns raised by reviewers in our \\\"Response to All Reviewers.\\\" In particular, we would like to address the feedback from Reviewer **JkAA**, who expressed skepticism regarding the trade-off between quality and robustness in LLM watermarking. Reviewer **JkAA** argues that distortion-free methods eliminate this trade-off, rendering our claims about achieving a better quality/robustness balance invalid. \\n\\nWe respectfully disagree with this perspective and have provided detailed responses in our latest reply to Reviewer JkAA. Below is a summary of our reasoning: \\n\\n1. **Distortion-Free Methods Are Not Robust:** \\n - No existing distortion-free watermarking scheme achieves robustness comparable to logits-based methods like Unigram. \\n - Our additional experiments demonstrate that distortion-free methods, such as Unbiased, fall short in detecting watermarked text after paraphrasing. 
This weak robustness has also been corroborated by recent benchmarking studies, including *MarkLLM* and *Mark my words*. \\n\\n2. **Compromises in Efficiency, Accessibility, and Adaptability:** \\n - Distortion-free methods sacrifice efficiency (e.g., our method is **16,000 times faster than EXP-Edit** and **680 times faster than Unbiased** in detection time). \\n - Accessibility is limited: **Unbiased requires access to the token logits from the LLM API and the prompts**, which may not always be feasible. \\n - Adaptability is hindered, as distortion-free methods are **incompatible with beam search and low-entropy scenarios**. \\n\\nOur proposed method does not face these limitations, demonstrating superior efficiency, accessibility, and adaptability while maintaining robustness and quality. \\n\\n---\\n\\nWe deeply value the reviewers' thoughtful feedback, which has strengthened our work. We hope this response clarifies our position and addresses any lingering concerns. \\n\\nThank you for your continued support. \\n\\nSincerely, \\nSubmission9685 Authors\"}", "{\"title\": \"Request for Further Feedback\", \"comment\": \"Dear Reviewers,\\n\\nWe hope this message finds you well. We are writing to follow up on the responses we provided to your valuable comments on our submission. We have already addressed all the points raised.\\n\\nAs the discussion deadline is approaching in a few days, we kindly request your further feedback on our responses at your earliest convenience. Your insights are crucial for us to improve our work. 
Thank you very much for your time and consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer QAhy,\\n\\nWe apologize for reaching out again, but with the rebuttal discussion period concluding in **less than 36 hours**, ***Nov 26, 11:59 PM AoE***, we sincerely appreciate the continued time and effort you dedicate to discussing our submission.\\n\\nTo demonstrate the flexibility of our model in adjusting quality metrics trade-offs, Table 5 shows that setting \\\\( k \\\\) to 40 enhances log diversity with a slight increase in perplexity. The ROC-AUC curves in Appendix B show that our model effectively manages false positive rates through threshold adjustments. Fig. 11 illustrates the watermark encoder's output values for each token in a sentence, identifying whether tokens fall into green or red lists, thereby enhancing the interpretability of our method. Detailed training information is provided in Appendix F, and all concerns about OOD issues in prompts and LLMs have been addressed.\\n\\nWe are truly grateful if you could let us know if there are any remaining questions or concerns about our submission. If our responses satisfactorily address your concerns, we kindly hope you might consider reflecting this in your review score.\\n\\nThank you once again for your thoughtful feedback and for your consideration.\\n\\nBest regards, \\nSubmission9685 Authors\", \"title\": \"Request for Timely Feedback on our Clarification of Quality Trade-offs and OOD Concerns\"}", "{\"summary\": \"In this paper, the authors present a novel logit-based watermarking pipeline for text generation. Their approach incorporates Gumbel-Softmax for sampling and an online prompting technique for adversarial edits, allowing the encoder and decoder to be trained in an end-to-end fashion. 
The method achieves state-of-the-art detectability under various attacks while maintaining high-quality generated text.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper includes an extensive section on experiments, including many state-of-the-art methods and attack scenarios.\", \"The results for overall detectability and text quality look promising.\", \"The encoder and decoders are small, so although an extra watermark encoder and decoder have been introduced, the generation and detection are very efficient.\"], \"weaknesses\": [\"The result on generation diversity is not great as the proposed method has the lowest diversity among all other methods. Even though this doesn't affect the results on the benchmarks, I think this might be a bad feature for certain tasks, like synthetic data generation.\", \"The proposed method is training-based not like some of the baselines. The method might suffer OOD issues that the distribution of the prompt at the inference time is quite different from the training.\", \"The proposed method used a classifier for detection, and this does not give us an interpretable result like a p-value. This might also be bad if we want to control the false positive rate during detection.\"], \"questions\": [\"I wonder how expensive is the training especially the requirement for the GPU memory. If I understand it correctly, the forward looks like: first token logits -> encoder -> sample the first token -> second token logits -> encoder -> sample the second token -> ... So we recursively call the encoder for n times if we generate n tokens. Would the computational graph be huge? Especially you also have to sample some tokens from the online text editor later. I wonder how did you train it exactly.\", \"As I mentioned above, the method might suffer OOD issues. 
Would the encoder/decoder trained on WikiText-103 still be effective for other datasets?\", \"Another thing that confuses me is that: where do you show the results for cross-llm inference? I noticed you mentioned: \\\"To train our end-to-end model, we chose OPT-1.3B as the online LLM for efficiency.\\\" Does this mean results for llama models are the transfer inference results? Or this is for the online text editor.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### 4. Computational Resources\\nWhile our model requires additional resources during the training phase (details provided in Appendix F), we have developed a converter to enable ***efficient and cross-LLM*** inference. As shown in Table 3, the time and memory usage for watermark detection remains negligible. Additionally, the time overhead for watermark embedding can be mitigated through parallel tokenization, reducing the time complexity by up to $1/k$. Although this optimization is beyond the scope of our current work, it can be explored further in future research.\"}", "{\"comment\": [\"### W1: Robustness/Quality Trade-off\", \"For the explanation of the log diversity, please refer to the \\\"Response to All Reviewers.\\\"\", \"### W2: OOD\", \"We have fully addressed the OOD problem in our evaluation. The prompts used in the training and testing phases come from different datasets. For training the watermark encoder/decoder, we use the WikiText-103 dataset, which contains articles from Wikipedia. For evaluation, we use the RealNewsLike subset of the C4 dataset, which contains news articles that differ significantly in content from the training set. The results in Fig. 
5 demonstrate that our method is robust to OOD scenarios.\", \"### W3: Detection and Interpretability\", \"We use a deep classifier for detection and provide flexibility in controlling the false positive rate by adjusting the threshold for the decoder\\u2019s output logits. The ROC-AUC curve in Appendix B illustrates our model\\u2019s ability to effectively manage false positive rates through the threshold.\", \"Additionally, for any given sentence, we can retrieve the logits perturbation (output of the watermark encoder) for each token as shown in Fig. 11, allowing us to determine whether tokens fall into green/red lists. This capability enables the interpretability of the watermark signal, similar to the method used in KGW.\", \"### Q1: Watermark Embedding and Model Training\", \"The watermark embedding is applied at each generation step during the training phase. Our end-to-end model was trained on a single NVIDIA RTX A6000 48GB GPU for 35k steps over approximately 5 days, with a GPU memory usage of 21.96 GB. By adjusting the maximum number of generated tokens (set to 100 in our training), we can control the complexity of the computational graph. Further details of our model training are provided in Appendix F.\", \"### Q2: Addressing OOD\", \"Please refer to the response for 'W2: OOD'.\", \"### Q3: Cross-LLM Evaluation\", \"Our model is trained with OPT-1.3B and evaluated on Llama-2 and Llama-2-Chat for cross-LLM inference.\"]}", "{\"comment\": \"Thank you very much for your kind and thoughtful feedback. We are delighted to hear that our clarifications were helpful. We are grateful for your positive evaluation and the time you have taken to review our work.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your response. My main concern is that I am unclear on how the paper improves over related work. 
The abstract states that it is about the trade-off between text quality and robustness; however, when I look at Figure 5, the robustness of the proposed method is not significantly higher than that of other methods. Similarly, for Figure 6, the quality scores appear similar to those of all other approaches. Your method is trainable, which is interesting but has, in principle, been done before (even though, as you correctly point out, Abdelnabi et al. [C] focus on seq2seq watermarking). However, your approach increases complexity as it requires a lot of computational resources to train and lacks reproducibility, as the process is much more involved than any of the other methods you compare with, such as KGW. Could you please clarify this point for me? Thank you.\"}", "{\"title\": \"Raise my rating\", \"comment\": \"The additional experimental results (including SIR) in Appendix A, B, and D provide a more detailed comparison of the performance in terms of the trade-off between robustness and text quality, as well as token distribution bias, among non-training-based methods, existing training-based methods, and the authors\\u2019 method.\\n\\nBased on these efforts, I have raised my scores.\\n\\nBy the way, there are additional questions you can provide some response if available.\\n\\n1. Could you explain why the UPV's F1 Score under BERT-Large in Table 2 is 0?\\n\\n2. 
Have you evaluated the watermark performance under beam search?\"}", "{\"comment\": [\"### Weaknesses\", \"We have revised the related work section [A] and added a discussion in Appendix H, highlighting the advantages of logits-based watermarking compared to generation-based methods.\", \"Our network design prioritizes efficient watermark embedding and detection.\", \"While our experiments utilize SOTA open-source LLMs, we acknowledge the importance of evaluating our method on the latest architectures, which will be addressed in future work.\", \"Following the recent benchmark work [B], we present attack evaluation results in Fig. 4 and Fig. 9, with detailed analysis provided.\", \"The caption for Fig. 2 has been revised to enhance clarity.\", \"### Q1: Comparative Analysis\", \"We evaluate our model against training-based methods (UPV and SIR) in terms of effectiveness, robustness, and text quality. The results demonstrate how our model achieves a favorable trade-off between robustness and quality.\", \"### Q2: Efficiency and Generalization\", \"The inference overhead arises from tokenizing $k$ sequences per step. This can be mitigated through parallel tokenization, reducing time complexity by up to $1/k$.\", \"As shown in Fig. 4, the similarity in tokenization results across tokenizers enables our converter to perform well if the target sentences align with the training-phase tokenizer.\", \"Potential failure modes include handling sentences with unrecognized symbols or characters by the training-phase tokenizer.\", \"### Q3: Evaluation Scope\", \"While we have compared our method with several SOTA open-source LLMs, we recognize the need for broader testing across diverse architectures and scales. Due to time constraints, this will be explored in future research. 
We agree that this is a valuable area for further study.\", \"### Q4: Related Work and Claims\", \"Additional related works have been incorporated in Appendix H.\", \"The \\\"first end-to-end framework\\\" refers specifically to logits-based LLM watermarking, as stated in the abstract.\", \"### Q5: Security Analysis\", \"The watermark encoder should be protected as a private key (as in KGW), while the watermark decoder can be publicly accessible due to the neural network's black-box nature.\", \"Appendix E includes experiments on undetectability when attackers have access to paired watermarked/unwatermarked samples. Results show increased detectability compared to unpaired samples in Table 2.\", \"### Q6: Architecture Choices\", \"The LSTM structure mimics the KGW-based method, where green/red lists are derived from preceding tokens. The LSTM network captures temporal dependencies from preceding sequences.\", \"The lightweight architecture ensures efficient watermark embedding and detection. Without resource limitations, advanced architectures like Transformers could be employed in the watermark encoder/decoder.\", \"[A] Abdelnabi, Sahar, and Mario Fritz. \\\"Adversarial watermarking transformer: Towards tracing text provenance with data hiding.\\\" 2021 IEEE Symposium on Security and Privacy (SP), 2021\", \"[B] Leyi Pan, Aiwei Liu, Zhiwei He, Zitian Gao, Xuandong Zhao, Yijian Lu, Binglin Zhou, Shuliang Liu, Xuming Hu, Lijie Wen, et al. Markllm: An open-source toolkit for llm watermarking. arXiv preprint arXiv:2405.10051, 2024\"]}", "{\"metareview\": \"This submission proposes a learned encoder-decoder approach for text watermarking that can be learned end to end. 
Learning the watermark scheme end to end in this way increases performance in a number of robustness characteristics by smaller amounts, but gives up the guarantees developed in previous work.\\n\\nOne point that several reviewers brought up, and I concur, is that the positioning of this paper as \\\"the first encoder-decoder/end-to-end\\\" learned watermark is surprising. The submission relates only to recent work (2024 onward) on KGW-type analytical watermarks, missing the entire generation of watermark papers before that, which were based on learned encoder-decoder setups in various configurations, where Abdelnabi&Fritz, 2020 is only one of the more common examples (which even quite closely relates to the mechanics of this work, aside from still using a separate model for watermark encoding and generation, which is an older style appropriate for weaker LMs). Liu et al., 2024b and Huo et al., 2024 are a few other examples brought up by the reviewers.\\n\\nSecondly, I think it is interesting to note that analytic watermarks forgo the methods of previous work in order to provide guarantees of various forms, such as provable p-values - which are highly valuable for high-stakes applications such as text forensics. This issue is also at the core of the inquiry of JkAA, who provides additional details on this discussion in relation to analytical watermarks that explore the robustness trade-off targeted by the authors.\\n\\nOverall, support from positive reviewers is marginal, and I do consider these issues significant enough that I do not recommend acceptance for now. 
I do think this work has merit (especially around the transfer conversion part), and I hope the authors are going to revise their manuscript based on the feedback received.\", \"additional_comments_on_reviewer_discussion\": \"Aside from points raised above, the authors discuss a number of smaller points with reviewers, such as evaluation scope, statistical confidence in their results and model training details.\"}", "{\"title\": \"Request for Timely Feedback on our Clarification of Robustness/Quality Trade-offs and Training Details\", \"comment\": \"Dear Reviewer sSkQ,\\n\\nWe apologize for reaching out again, but with the rebuttal discussion period concluding in **less than 36 hours**, ***Nov 26, 11:59 PM AoE***, we sincerely appreciate the continued time and effort you dedicate to discussing our submission.\\n\\nTo ensure clarity regarding the robustness-quality trade-offs, we have provided additional explanations, including detailed comparisons in Appendix A that demonstrate the superiority of our model. Furthermore, we have included comprehensive training details in Appendix F and have addressed all the weaknesses and questions raised earlier.\\n\\nWe are truly grateful if you could let us know if there are any remaining questions or concerns about our submission. If our responses satisfactorily address your concerns, we kindly hope you might consider reflecting this in your review score.\\n\\nThank you once again for your thoughtful feedback and for your consideration.\\n\\nBest regards, \\nSubmission9685 Authors\"}", "{\"comment\": \"We would like to clarify that our primary claim focuses on the trade-off between ***quality and robustness***, rather than ***quality and detectability***. In our work, we define detectability as the detection accuracy on unaltered, watermarked text; while robustness refers to the detection accuracy on modified watermarked text. 
The modifications may include edits made by the user after receiving the watermarked text, such as synonym substitution, paraphrasing, or other changes, which are common in practice.\\n\\nAn ideal watermarking scheme would produce a watermark that is distortion-free (as defined in EXP-Edit[A] and Unbiased[B]) and also demonstrates resilience to potential text modifications, similar to the robustness of Unigram[C]. Unfortunately, to the best of our knowledge, ***no existing watermarking scheme meets both of these criteria simultaneously***.\\n\\nTherefore, the motivation behind our work is clear: we aim to ***achieve a better trade-off between quality and robustness by explicitly optimizing the quality and robustness objectives in an end-to-end manner***.\\n\\nIn addition to the quality/robustness trade-off, we would like to highlight several key properties that EXP-Edit and Unbiased sacrifice in order to achieve distortion-free watermark embedding, especially when compared to our method:\\n\\n1. **Detection Time Complexity** \\n We compare the average time required to detect a single watermarked sample using KGW, EXP-Edit, Unbiased, and our method. The experiment is conducted with Llama2-7B on a single NVIDIA A6000 GPU, with the following results:\\n\\n | Method | Required Time (seconds) |\\n |-------------|-------------------------|\\n | KGW | 0.3 |\\n | EXP-Edit | 80 |\\n | Unbiased | 3.4 |\\n | Ours | 0.005 |\\n\\n Our method is highly efficient, requiring no access to the LLM and benefiting from GPU parallel acceleration. In contrast, Unbiased requires additional access to the LLM and prompts, while EXP-Edit has significantly longer detection times. Our method is **16,000 times** faster than EXP-Edit and **680 times** faster than Unbiased, making it feasible for scalable watermarking systems.\\n\\n2. **Accessibility** \\n Unbiased requires access to the token logits of the LLM API and the prompt, which could reduce its accessibility. 
In contrast, our method, similar to KGW, only requires the text to be detected, making it simpler to deploy.\\n\\n3. **Choice of LLM Decoding Strategy** \\n Both EXP-Edit and Unbiased work by manipulating the sampling process, and thus do not function with beam search due to the deterministic nature. As shown in Appendix I, our method performs even better with beam search than with multinomial sampling. This is because beam search tends to select higher-probability tokens, which implicitly favors more of the green-list tokens in the generated text.\\n\\n4. **Low-Entropy Scenarios** \\n EXP-Edit and Unbiased are not effective in generative processes with low entropy, such as code generation, as empirically shown in the MarkLLM benchmark [E]. In contrast, our method demonstrates superior performance in code generation, as illustrated in Fig. 6 (c). Our method is logits-based and can be further enhanced with techniques like SWEET[D] to improve performance in low-entropy scenarios.\\n\\nWe hope this clarifies the motivation behind our work and emphasizes the practical advantages of our approach. Thank you for your valuable feedback.\\n\\n[A] Kuditipudi et al. Robust Distortion-free Watermarks for Language Models, TMLR 2024\\n\\n[B] Hu et al. Unbiased watermark for large language models, ICLR 2024\\n\\n[C] Zhao et al. Provable Robust Watermarking for AI-Generated Text, ICLR 2024\\n\\n[D] Lee et al. Who Wrote this Code? Watermarking for Code Generation, ACL 2024\\n\\n[E] Pan et al. 
MarkLLM: An Open-Source Toolkit for LLM Watermarking, EMNLP 2024\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Request for Timely Feedback on our Clarification of Motivation and Contributions\", \"comment\": \"Dear Reviewer JkAA,\\n\\nWe apologize for reaching out again, but with the rebuttal discussion period concluding in **less than 36 hours**, ***Nov 26, 11:59 PM AoE***, we sincerely appreciate the continued time and effort you dedicate to discussing our submission.\\n\\nWe justify our motivation with recent LLM watermark benchmarking studies, which confirm the existence of a robustness/quality trade-off in distortion-free watermarking methods. Our contributions are as follows: 1) addressing the robustness/quality trade-off through end-to-end training of the watermark encoder/decoder; 2) tackling the challenge of non-differentiable modules in the training pipeline with an innovative online prompting; and 3) demonstrating superior performance across key metrics compared to state-of-the-art competitors, as highlighted in Figure 9.\\n\\nWe are truly grateful if you could let us know if there are any remaining questions or concerns about our submission. If our responses satisfactorily address your concerns, we kindly hope you might consider reflecting this in your review score.\\n\\nThank you once again for your thoughtful feedback and for your consideration.\\n\\nBest regards, \\nSubmission9685 Authors\"}", "{\"comment\": \"### W1: Distortion-Free Watermarking\\n- We have indeed considered distortion-free watermarking in our manuscript and described them as sampling-based methods (EXP). However, we argue that even distortion-free watermarking can degrade the quality of output text. This is supported not only by our empirical results in Fig. 6 ***but also by the LLM watermark benchmarking studies [A], [B]***. 
Consequently, we emphasize that the robustness/quality trade-off remains a persistent challenge for existing LLM watermarking models and is still an open problem to be solved in the field.\\n\\n### W2: Contributions \\n- Given the persistence of the robustness/quality trade-off, our primary contribution is achieving a better trade-off through encoder/decoder end-to-end training. \\n- Additionally, we address the key challenge of incorporating non-differentiable modules into the end-to-end training pipeline by proposing an innovative online prompting approach.\\n- As highlighted in Fig. 9, our model achieves a better performance across key metrics against SOTA competitors.\\n\\n### W3: Robustness and quality Trade-offs \\n- Please refer to the \\\"Response to All Reviewers\\\". A detailed comparison of our method with SOTA competitors is presented in Table 5, accompanied by a radar graph in Fig. 9. These results demonstrate the clear advantages of our approach, including significant improvements in robustness, perplexity, and downstream task performance. This highlights our model's ability to achieve a well-balanced performance across key metrics.\\n\\n[A] Leyi Pan, Aiwei Liu, Zhiwei He, Zitian Gao, Xuandong Zhao, Yijian Lu, Binglin Zhou, Shuliang Liu, Xuming Hu, Lijie Wen, et al. Markllm: An open-source toolkit for llm watermarking. arXiv preprint arXiv:2405.10051, 2024\\n\\n[B] Piet, Julien, et al. \\\"Mark my words: Analyzing and evaluating language model watermarks.\\\" arXiv preprint arXiv:2312.00273, 2023.\"}", "{\"title\": \"Statistical Significance of Improvements\", \"comment\": \"We conduct additional robustness and quality experiments using a paired t-test to validate the statistical significance of our model's improvements. 
The results, presented in Appendix J, demonstrate that our model achieves statistically significant improvements in both robustness and quality compared to the competitors.\\n\\nWe appreciate your suggestion, as it helped us further substantiate the effectiveness of our method. Please do not hesitate to share any further questions or suggestions.\"}", "{\"comment\": \"1.**Robustness of sampling-based methods**\\n\\nTo further validate our claim regarding the quality/robustness trade-off, we conduct an experiment on Unbiased alongside logits-based methods, including KGW, Unigram, and our proposed approach. The evaluation involved subjecting the watermarked text to Dipper paraphrasing under two settings, with the results shown below:\\n\\n| Method | PP-Dipper (lex:60; order:0) | PP-Dipper (lex:60; order:20) |\\n|-----------|--------------------------|--------------------------|\\n| KGW | 0.878 | 0.792 |\\n| Unigram | 0.885 | 0.879 |\\n| Unbiased | 0.687 | 0.689 |\\n| Ours | **0.916** | **0.902** |\\n\\nThe results demonstrate that our method achieves the highest F1 score among the competitors, followed by the other logits-based methods, KGW and Unigram. In contrast, the distortion-free method, Unbiased, performs poorly in detecting watermarked text after paraphrasing, with low F1 scores of 0.687 and 0.689 in the two paraphrasing scenarios, respectively.\\n\\n2.**Influence of LLMs settings**\\n\\nWe argue that a feasible watermarking scheme should demonstrate adaptability to various LLMs and configurations, rather than being effective under only specific conditions. These configurations may include variations in LLM itself, temperature, decoding strategies, top-$p$/top-$k$ values, or other parameters. 
***An effective watermarking scheme should preserve both the quality of the LLM's output and the detection accuracy across diverse settings***.\\n\\n***Our proposed method achieves this adaptability by leveraging the tunable watermark strength $\\\\delta$ and top-$k$ values***. As shown in Fig. 5 and 6 of our manuscript, our approach consistently performs well across different LLMs and settings. \\n\\nOur experiments, along with prior studies such as SWEET (referenced in Table 1), reveal that sampling-based methods like EXP-Edit (which Unbiased also follows using a similar inverse transform principle) struggle to achieve high detection scores while maintaining distortion-free outputs. This limitation arises from the fixed nature of the sampling-based watermark schemes and the spiky distribution of low-entropy outputs generated by code-generation LLMs, which reduces the effectiveness of sampling-based approaches. To enhance detectability in low-entropy scenarios, increasing the temperature is often necessary; however, this adjustment compromises the quality of the generated output.\"}", "{\"comment\": \"Thank you very much for your positive feedback and for raising your scores based on our additional experimental results. We appreciate your continued engagement with our work.\", \"regarding_your_additional_questions\": \"### **The F1 Score of UPV**\\nAn F1 score of 0 indicates that all watermarked samples were misclassified. Meanwhile, an accuracy of 0.480 suggests that some non-watermarked samples were also misclassified.\\n\\n### **Watermark Performance Under Beam Search**\\nWe use multinomial sampling as the LLM decoding strategy, and the detection results for beam search are further presented in Appendix I. 
Overall, our method continues to demonstrate superior performance and robustness.\"}", "{\"summary\": \"The authors present an end-to-end training-based text watermarking method aimed at achieving an optimal trade-off between text quality and robustness, leveraging the logits-based watermarking framework introduced by Kirchenbauer et al. Specially, they jointly train additional encoder to generate logits perturbation to shift the tokens\\u2019 probability distribution and additional decoder to extract the watermarking signals from the text. In addition, the authors introduce distortion module, address the non-differentiable operations in the end-to-end training pipeline, and consider the generalization to different LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"A distortion module is helpful to enhance the robustness.\"], \"weaknesses\": [\"Insufficient coverage of relevant related work\", \"Inadequate explanation of key methodological design choices\", \"Evaluation on outdated LLM architectures\", \"Limited adaptive attack evaluation\", \"Unclear figure captions (specifically Fig. 
2)\"], \"questions\": [\"### **Comparative Analysis**\", \"The paper briefly mentions other training-based methods (UPV, SIR) but lacks detailed comparison\", \"Please provide in-depth analysis of architectural differences and performance variations between this work and existing training-based approaches\", \"### **Efficiency and Generalization**\", \"The cross-model inference time overhead is significant - what optimizations are possible?\", \"How does the method handle LLMs not included in the cross-model converter?\", \"What is the failure mode analysis?\", \"### **Evaluation Scope**\", \"Evaluation should include more recent LLMs (e.g., Yi, Qwen)\", \"Need broader testing across model architectures and scales\", \"### **Related Work and Claims**\", \"Notable omission of generation-based watermarking methods (e.g., AWT [1], REMARK-LLM [2])\", \"The \\\"first end-to-end framework\\\" claim requires more careful qualification\", \"[1] Adversarial watermarking transformer: Towards tracing text provenance with data hiding\", \"[2] REMARK-LLM: A robust and efficient watermarking framework for generative large language models\", \"### **Security Analysis**\", \"How does the method perform against adaptive attacks where adversaries have full access to the system?\", \"Need evaluation of undetectability and robustness when attackers can obtain paired watermarked/unwatermarked samples\", \"### **Architecture Choices**\", \"Please justify the selection of LSTM as the decoder backbone\", \"What alternatives were considered and why were they rejected?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed an end-to-end optimization framework for achieving better trade-off between the robustness and the text quality. 
The authors validate the effectiveness of the proposed framework with comprehensive experiments on popular LLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed end-to-end method is original, and extensive experiments have been conducted to evaluate its quality, detectability, and robustness.\\n\\n2. The presentation is well-structured, making the paper easy to follow.\", \"weaknesses\": \"1. Unclear motivation. The authors claimed \\u201cHowever, these existing approaches still fail to achieve an optimal trade-off between text quality and robustness\\u201d. However, the authors have missed an important line of works regarding the distortion-free watermark (Kuditipudi et al., 2024; Christ et al., 2024), which suggested we can embed watermarks into LLMs without affecting the generation quality. Thus, there is generally no trade-off between the text quality and robustness, and the claim in the paper is wrong.\\n\\n2. Limited contribution. Compared to the previous works (Liu et al., 2024b; Huo et al., 2024), which also share an encoder-decoder structure for logits-based watermarking, the proposed method only introduces a jointly trained network for achieving a better trade-off between text quality and robustness. As the reviewer has pointed out in weakness 1, the trade-off generally does not exist. Thus, the contributions of the proposed method are unclear.\\n\\n3. The experimental results also cannot support the motivation of \\u201cachieving better trade-off between text quality and robustness\\u201d. In Figure 7, the KGW watermark has significantly better quality than the proposed watermark, although the detectability and the robustness of KGW are poor. To claim the proposed method has achieved a better trade-off than KGW, the authors should show the superiority of the proposed method on all quality, detectability, and robustness axes. 
Besides, in Figure 5, we can also see that the proposed method does not always outperform the baselines in all scenarios.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"I sincerely appreciate the authors' detailed response. I recognize that there were some misunderstandings on my part, which the authors have effectively clarified. Especially, the model is only trained with a small OPT-1.3B model and applied to larger Llama models. I feel like such transferability makes the method easy to use for different models and makes it kind of \\\"training-free.\\\" Therefore, I raised my score to positive.\"}", "{\"summary\": \"The authors propose an end-to-end optimized watermarking method for large language models to enable the detection of AI-generated content. The goal is to enhance the robustness/text quality trade-off of current LLM watermarking methods. The challenge is that many operations, such as generating sequences of text, are not differentiable. The authors overcome this issue by using the well-known Gumbel-Softmax trick to backpropagate through the text-generating process. To enhance robustness, the authors incorporate a paraphrasing model during the optimization method, and they develop cross-LLM adapters to train on one LLM and deploy it to other LLMs. 
They show robustness against six text modification attacks and improved text quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method works well and allows adapting against paraphrasing attacks during optimization.\", \"The authors thoroughly evaluate their approach by including experiments on robustness, detectability and impact on runtime during inference.\", \"The paper is clear in its presentation and presents the proposed ideas well.\", \"The cross-LLM inference adapter is a great idea, and I have not seen one before for trainable watermarking methods.\"], \"weaknesses\": \"- The results from Figure 5 in their current form are not reproducible and lack transparency. I believe it should be a scatter plot that includes the quality degradation, and the authors should state the hyperparameters for each approach used for paraphrasing (e.g., the prompt used for paraphrasing).\\n\\n- Abdelnabi et al. [A] have previously proposed end-to-end watermarking for LLMs. They also use the Gumbel-softmax trick to differentiate through the text generation process. The authors should consider citing this work.\\n\\n- Figure 7, showing the difference in token distribution for the top 50 tokens, is difficult to interpret. It looks like the distance to the non-watermarked text is quite large (especially compared to KGW). Also, the choice of using 400 non-watermarked/watermarked samples is unclear. I think it would be better to plot detection accuracy against the size of the training dataset. \\n\\n- It is well known that perplexity is an unreliable metric used to measure text quality [C]. I was surprised that the authors did not include watermarked samples in their Appendix. There is a known problem: training LLMs with Gumbel-softmax is unstable and can lead to poor results for text generation [D]. 
Could the authors please show watermarked samples and (potentially) include a limitation section on current challenges when using this optimization method? \\n\\n--------\\n[A] Abdelnabi, Sahar, and Mario Fritz. \\\"Adversarial watermarking transformer: Towards tracing text provenance with data hiding.\\\" 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021.\\n\\n[C] Wang, Yequan, et al. \\\"Perplexity from plm is unreliable for evaluating text quality.\\\" arXiv preprint arXiv:2210.05892 (2022).\\n\\n[D] Yu, Zhang Ze, et al. \\\"Fine-tuning Language Models with Generative Adversarial Reward Modelling.\\\" arXiv preprint arXiv:2305.06176 (2023).\", \"questions\": [\"I do not understand why the prompt between non-watermarked and watermarked texts needs to differ (footnote 3 on page 9). Why can't the attacker re-use the same prompts when querying non-watermarked texts?\", \"In Figure 2, I am unclear how the authors calculate the distance $L_{sem}$ between the watermarked and non-watermarked texts $X_{wm}, X_{nwm}$. Since both sequences will differ in the sampled tokens, they will diverge throughout the generation process if sampled for many tokens. Then, calculating this semantic distance will be meaningless as you cannot effectively align $X_{wm}, X_{nwm}$. Also, it appears unreasonable that the averaged similarity over many contexts will be a meaningful measure of the overall similarity between two sequences. I would appreciate the authors elaborating on this point and providing more context\", \"The description of the online text editing module is a bit confusing to me. Do the authors also use Gumbel-softmax for the online text editor, or do they pass $X_wm$ directly to the decoder $D$ contrary to what is shown in Figure 1? Since the text generation process from the online text editor $N$ is not necessarily differentiable unless you use some trick, end-to-end training from the detector's prediction back to the encoder won't be possible. 
I would appreciate it if the authors could elaborate on this point.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0KFwhDqTQ6
PSHead: 3D Head Reconstruction from a Single Image with Diffusion Prior and Self-Enhancement
[ "Jing Yang", "Tianhao Walter Wu", "Kyle Thomas Fogarty", "Fangcheng Zhong", "Cengiz Oztireli" ]
In this work, we investigate the problem of creating high-fidelity photorealistic 3D avatars from only a single face image. This task is inherently challenging due to the limited 3D cues and ambiguities present in a single viewpoint, further complicated by the intricate details of the human face (e.g., wrinkles, facial hair). To address these challenges, we introduce PSHead, a coarse-to-fine framework that optimizes 3D Gaussian Splatting for a single image, guided by a mixture of object and face priors to generate high-quality 3D avatars while preserving faithfulness to the original image. At the coarse stage, we leverage diffusion models trained on general objects to predict a coarse representation by applying score distillation sampling losses at novel views. This marks the first attempt to integrate text-to-image, image-to-image, and text-to-video diffusion priors, ensuring consistency across multiple views and robustness to variations in face size. In the fine stage, we utilize pretrained face generation models to denoise the rendered noisy images, and use them as supervision to refine the 3D representation. Our method outperforms existing approaches on in-the-wild images, proving its robustness and ability to capture intricate details without the need for extensive 3D supervision.
[ "Diffusion models", "Text to 3D", "Image to 3D", "3D Avatar" ]
https://openreview.net/pdf?id=0KFwhDqTQ6
https://openreview.net/forum?id=0KFwhDqTQ6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yWopUZ7CcY", "otscJHxoFM", "m0QzXevYvv", "lBJVSvdmMy", "LevBbZmOSi" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730671633036, 1729876603989, 1729764755830, 1729845397311, 1731434340082 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission344/Reviewer_9agS" ], [ "ICLR.cc/2025/Conference/Submission344/Reviewer_i9LZ" ], [ "ICLR.cc/2025/Conference/Submission344/Reviewer_8AKB" ], [ "ICLR.cc/2025/Conference/Submission344/Reviewer_P7aq" ], [ "ICLR.cc/2025/Conference/Submission344/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a new approach called PSHEAD for generating high-quality 3D avatars from a single image. The key contribution of this research is the utilization of a mixture of diffusion priors to create a coarse representation of the input face, which is then refined through the integration of 2D face priors. Experiments demonstrate promising results, outperforming several baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper successfully demonstrates the effectiveness of integrating T2I, I2I, and T2V diffusion models into a single framework for generating 3D avatars, showing good performance.\\n2. The paper is well-written and easy to follow. \\n3. The experimental results demonstrate better performance than the baselines in single-view reconstruction.\", \"weaknesses\": \"1. My main concern lies in the technical contributions of this paper. The authors combine multiple models, such as T2I, I2I, and T2V, to achieve state-of-the-art results. They should provide more insights regarding the use of these models in the paper.\\n2. The author should explain why the I2V model was not used and include an ablation study for the I2V model.\\n3. 
The optimization-based method takes a long time to create a human head Gaussian model, requiring approximately 1.5 hours on a single NVIDIA A100 (80GB) GPU, which makes it difficult to use in practical applications.\", \"questions\": \"1. Can the generated head Gaussian model be driven? If so, please illustrate some novel pose synthesis results.\\n2. Missing some references:\\n [1] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars;\\n [2] DreamHuman: Animatable 3D Avatars from Text;\\n [3] TADA! Text to Animatable Digital Avatars;\\n [4] ZHOU Z., MA F., FAN H., YANG Y. Headstudio: Text to animatable head avatars with 3d gaussian splatting.\\n3. In the ablation study, as Fig. 4 and Tab. 3 show, self-enhancement plays an essential role in generating quality outputs. Does this mean that you do not require all of the diffusion model priors, but that relying on a single diffusion prior, such as T2I combined with self-enhancement, is sufficient? Please provide additional ablation studies, such as T2I + self-enhancement, I2I + self-enhancement, and T2V + self-enhancement. 
I want to be certain that it is necessary to employ all of the diffusion model priors to distill the initial head Gaussian model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper leverages the human face priors (e.g., Face landmarks and Face ID) and numerous 2D diffusion models via SDS to establish a coarse-to-fine pipeline for generating 3D avatars from a single image.\", \"The proposed method consistently surpasses existing techniques (Magic123, DreamGaussian, Era3D, and PanoHead) on PointAvatar, CelebA, and a private dataset, achieving superior quantitative and qualitative results.\", \"Detailed results and corresponding code are included in the supplements.\", \"However, the technical novelty is limited, as it primarily uses existing modules, and the empirical approaches for generating 3D Head Avatars from single images are typical.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"**1.** It includes a comprehensive review of related works.\\n\\n**2.** The work effectively integrates existing modules and validates the efficacy of critical design components. Furthermore, it addresses a significant problem in the field of 3D Head Reconstruction.\", \"weaknesses\": \"**1.** The work presents incremental methods, mainly refining Head Reconstruction with predictable improvements and relying extensively on off-the-shelf modules such as 2D pre-trained diffusion models, face landmark detection, and an ID recognition model (ArcFace) for the loss function. 
Specifically:\\n - Coarse stage: Employs DreamBooth for personalized T2I diffusion to produce a preliminary 3D-GS.\\n - Fine stage: Utilizes personalized T2I diffusion, landmark-guided ControlNet, and a pre-trained face refinement model (CodeFormer).\\n\\nThe authors should discuss the design intuition rather than empirically constructing an engineering pipeline.\\n\\n**2.** The complexity of the engineering pipeline, detailed in **Figure 2** and **Section 3**, makes the work hard to follow and may hinder further exploration and industrial applications.\\n\\nThe authors should reduce the number of modules, focusing on core modules as the main claim.\\n\\n**3.** PSHead lacks the capability to drive expressions.\\n\\nUnlike previous works such as HeadGAP and Morphable Diffusion, PSHead does not support expression-driven animation, limiting its applicability to various downstream applications.\\n\\n**4.** The paper omits crucial information about model parameters and reconstruction times compared to cutting-edge 3D generation works (e.g., in **Table 2** and **Table 3**).\\n\\n**5.** The per-instance optimization process takes approximately 1.5 hours (refer to Implementations), indicating high computational demands.\\n\\nI would appreciate it if the authors could address my concerns by providing corresponding quantitative or qualitative results based on the **weaknesses** and **review feedback**.\", \"questions\": [\"As depicted in **Figure 10**, is PSHead capable of effectively managing tasks involving the reconstruction of the head, upper body, and full body?\", \"Does PSHead exhibit any racial inductive biases?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an approach to head generation from a single image. 
The generation process consists of two stages: mixed SDS initialization with multiple pretrained models and head-specific refinement. The framework results in realistic\\n$360^{\\\\circ}$ head rendering.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed PSHead follows the pipeline of DreamGaussian, which also includes SDS-based initialization and image-based refinement. The author(s) add several well-designed components, such as DreamBooth, T2V-SDS, and Landmark ControlNet, improving the head generation quality compared to the baseline.\\n2. The paper includes comprehensive experiments to evaluate the effectiveness of each design.\", \"weaknesses\": [\"In the introduction, the author emphasizes that \\\"the normalization preprocessing steps of existing methods struggle in handling cases with varying scales.\\\" However, the results of PanoHead in Fig. 3 (also with shoulders) do not seem that bad. If this is a main motivation, I suggest conducting more comparisons to support it in the main paper rather than only showing a few cases in the appendix.\", \"The results in Fig.2 are not satisfactory, with apparent appearance and shape inconsistency. The $360^\\\\circ$ videos in supplementary also show severe blur in novel views, especially in back views. In comparison, the results of PanoHead are more realistic. It would be better to provide a more detailed analysis of these issues, including potential causes and ideas for improvement.\", \"The added SDS strategy and refinement components lead to severe efficiency degradation, nearly 1.5 hours as reported in implementations. I think the authors should conduct a runtime vs. quality metrics analysis of the trade-offs between quality improvements and computational cost.\", \"In ablations (section 5), the observation of the gaze direction is intuitive. 
However, it seems that there are no similar issues in Fig.3.\"], \"questions\": [\"Although the multi-modality (text, image, and video) SDS losses work, I am not that confident of the motivation. Is a single image-to-video model not enough? An analysis or an ablation is necessary.\", \"The GS representation usually results in the degradation of geometry. I hope for more geometry comparisons with PanoHead.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the problem of creating high-fidelity photorealistic 3D avatars from only a single face image. They propose a method that learns a 360\\u25e6 3D-GS representation for a reference image with varying face sizes, leveraging a mixture of diffusion priors to generate a coarse representation and refine the coarse representation in an innovative way by introducing 2D face priors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method demonstrates impressive results in generating high-fidelity photorealistic 3D avatars from a single face image. The use of a 360\\u25e6 3D-GS representation allows for capturing detailed facial features.\", \"weaknesses\": \"1. The paper claims to have achieved great 360 free view rendering. However, upon examining the visual results in the paper, it can be observed that the side view and back view contain excessive noise and are significantly blurrier than the front view. In comparison, it does not appear to be better than PanoHead.\\n\\n2. Many techniques employed in this paper have been used in other papers with similar goals, but they don't address the limitations of these techniques. For example, in the refinement stage, it is unclear how the multi-view inconsistency of refined novel views is handled.\\n\\n3. Mixed SDS. 
This paper utilizes three types of SDS loss. However, in Figure 4, it seems that T2V SDS only provides marginal enhancements compared to I2I SDS. Although improvements are shown in Table 3, it is not demonstrated whether T2V still performs well when the refinement stage is followed by only T2I + I2I.\\n\\n4. The method section indicates that the geometry is primarily based on the SDS loss. While personalized diffusion models are mentioned, it remains unclear whether the geometry captures intrinsic details and performs better than generic SDS methods.\\n\\n5. The paper reports better numerical results for novel views compared to the comparison methods. However, it is worth noting that most metrics for evaluating novel views are done in feature space rather than pixel space (such as PSNR). This could explain why the novel views generated by this method appear blurry, but still achieve higher scores than the baselines.\\n\\n6. The preservation of identity in the rendered avatars from novel views appears to be weak, as observed in Figure 3. In column 4, there is a noticeable change in identity.\", \"questions\": \"1. Questions: I don't have many questions for this paper.\\n2. Suggestions: It is recommended that the authors focus on improving the quality of the side view and back view in order to achieve better results. Additionally, they should validate the effectiveness of using mixed SDS loss by comparing it with one or two SDS losses that can potentially achieve similar performance when combined with the refinement stage. Furthermore, conducting evaluation in pixel space for novel views would provide more comprehensive results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0K1OaL6XuK
Planning Anything with Rigor: General-Purpose Zero-Shot Planning with LLM-based Formalized Programming
[ "Yilun Hao", "Yang Zhang", "Chuchu Fan" ]
While large language models (LLMs) have recently demonstrated strong potential in solving planning problems, there is a trade-off between flexibility and complexity. LLMs, as zero-shot planners themselves, are still not capable of directly generating valid plans for complex planning problems such as multi-constraint or long-horizon tasks. On the other hand, many frameworks aiming to solve complex planning problems often rely on task-specific preparatory efforts, such as task-specific in-context examples and pre-defined critics/verifiers, which limits their cross-task generalization capability. In this paper, we tackle these challenges by observing that the core of many planning problems lies in optimization problems: searching for the optimal solution (best plan) with goals subject to constraints (preconditions and effects of decisions). With LLMs' commonsense, reasoning, and programming capabilities, this opens up the possibilities of a universal LLM-based approach to planning problems. Inspired by this observation, we propose LLMFP, a general-purpose framework that leverages LLMs to capture key information from planning problems and formally formulate and solve them as optimization problems from scratch, with no task-specific examples needed. We apply LLMFP to 9 planning problems, ranging from multi-constraint decision making to multi-step planning problems, and demonstrate that LLMFP achieves on average 83.7\% and 86.8\% optimal rate across 9 tasks for GPT-4o and Claude 3.5 Sonnet, significantly outperforming the best baseline (direct planning with OpenAI o1-preview) with 37.6\% and 40.7\% improvements. We also validate components of LLMFP with ablation experiments and analyzed the underlying success and failure reasons.
[ "LLM Planning", "Code generation", "LLM Tool-Use" ]
Accept (Poster)
https://openreview.net/pdf?id=0K1OaL6XuK
https://openreview.net/forum?id=0K1OaL6XuK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yBHitIaqRu", "xtox6YgzIe", "waEPZm3qaI", "uvJE94XU3X", "tx6ZOEJm7y", "ruLHRq3g7S", "rBPqG4U3Gp", "pRs2RUaD4f", "oLlc7XmjJ5", "mhaMSvVr6p", "mdB3qGeQHx", "lCcCNVSa3v", "ks3sjYFZoZ", "kXPWGQYnhR", "iqN0OqSvo8", "eKr5ZdlQiv", "e2HySta6zf", "d4lqmi6qrr", "cY5CzVowjb", "bvgqvePoX2", "bDWbt6Uvnu", "aHIDnL0Nrr", "Rs3Ae1X5R7", "QheidjP7ST", "PTw0V6g35a", "OLzWBgO6Lu", "JbRaU7l7Ri", "JYOygv4LDB", "JLUp1XZhHG", "J1GDDRxqt5", "HeUsqxZ9bM", "GYQrrQ7ncD", "FqZ0wHoWcR", "FKU5SSz1Fc", "CeHYCksRdF", "AFBJmQuQSX", "AAAFUPuSTK", "8zy7OfFQY8", "8q9yzrRWB9", "530SGzz34o" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732552561390, 1732354845188, 1732633637252, 1732354165104, 1732417884255, 1732366564322, 1732361009460, 1732633895092, 1732360292252, 1730765666688, 1732353748376, 1732358819222, 1734741922490, 1732358767510, 1732725061141, 1732633820150, 1732633520491, 1730588357492, 1732354756980, 1732358513849, 1733079817224, 1733079926657, 1732358903569, 1732353715328, 1732725106690, 1732358700353, 1732753755036, 1732725168919, 1730673232356, 1732358586073, 1732725198151, 1732360237014, 1729676309164, 1737523739888, 1732808255142, 1732360462484, 1732839895059, 1732417977505, 1733179097645, 1732354121897 ], 
"note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_FxuD" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_dysu" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_qLBs" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Area_Chair_G5k4" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_dysu" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_dysu" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_8M6X" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_FxuD" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_FxuD" ], [ 
"ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ], [ "ICLR.cc/2025/Conference/Submission6029/Reviewer_dysu" ], [ "ICLR.cc/2025/Conference/Submission6029/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Authors comment\", \"comment\": \"Thank you for providing the detailed corrections in the revised version of the paper and mentioning the theoretical insights about the results. In the final version of the paper, it would be great to see the theoretical insights as well, apart from the comments provided here, as I really appreciate them; these are important statements, not just experimental results.\\nHowever, I am still not convinced by your explanation regarding W3 \\\"general approach, which does not require task-specific examples or task-specific efforts\\\". Upon revisiting the appendix section, I observed that different prompts are created for each setup, say multiple code generator prompts, and it seems the claim understates the prompt engineering done to achieve the goal. Don't we need new prompt engineering for each task, each section and each domain? If yes, then I would say the framework is not as generic as stated. 
Also this is stated in your response \\n>\\\"The user description of the task indeed needs to be elaborate and accurate\\\".\\n\\nCan you point me to other existing work prompts which require more elaborate prompting compared to your work and differences compared to your prompt design?\"}", "{\"title\": \"Response to Reviewer 8M6X 2/2\", \"comment\": \"**8M6X-Q3: Definition of planning problems**\", \"8m6x_a3\": \"We extend the classical planning problem and define our planning problem as a tuple $P= \\\\langle \\\\mathcal{S}, \\\\mathcal{A}, \\\\mathcal{C}, \\\\mathcal{T}, \\\\mathcal{I}, \\\\mathcal{G}, f\\\\rangle$, where $\\\\mathcal{S}$ is a finite set of states, $\\\\mathcal{A}$ is the set of actions, $\\\\mathcal{C}$ is a set of constraints, $\\\\mathcal{T}: \\\\mathcal{S} \\\\times \\\\mathcal{A} \\\\rightarrow \\\\mathcal{S}$ is the transition function, $\\\\mathcal{I} \\\\subseteq \\\\mathcal{S}$ is the initial state, $\\\\mathcal{G} \\\\subseteq \\\\mathcal{S}$ is the set of goal states, and $f: \\\\mathcal{S} \\\\rightarrow \\\\mathbb{R}$ is the cost function.\", \"for_the_coffee_supply_chain_example\": [\"States $\\\\mathcal{S}$: all possible states of raw/roasted/shipped coffee\", \"Actions $\\\\mathcal{A}$: source coffee bean, roast beans into dark or light coffee, ship roasted coffee\\u2026\", \"$\\\\mathcal{T}$: how state changes after actions\", \"Initial state $\\\\mathcal{I}$: all coffee beans are still in suppliers,\", \"Goal state $\\\\mathcal{G}$: coffee are roasted and shipped to cafes, fulfilling cafe demands\", \"Constraints $\\\\mathcal{C}$: explicit and implicit constraints, such as shipped coffee beans can not exceed supplier capacity, shipped coffee from roastery can not exceed received coffee beans, etc\\u2026\", \"f: calculate total cost\", \"Then the planning problem involves delivering a plan $\\\\pi$ accomplishing the task specified considering all constraints at the cheapest cost.\", \"**8M6X-Q4: Why not use a symbolic 
planner**\"], \"8m6x_a4\": \"* All symbolic planners have some learning curve. To use symbolic planners, the end-users have to be the experts who understand, interpret, and program the problem to be utilized by the symbolic planners. We imagine that this means the end-users would require at least a bachelor\\u2019s degree in CS or relevant majors.\\n* To use our framework, the end-users are **anyone who speaks natural languages**. LLMs act as the interface with the end users who can use their daily spoken language and can **generalize** to different user **queries** and even different **tasks** with **no task-specific efforts**.\\n\\nLike other LLM tool-using frameworks [4,5], the purpose of LLM tool-using is to allow end-users to solve complex problems without becoming experts in using those tools. For example, it often requires at least 2 years for a graduate student to understand SMT solvers. Now, our framework enables common end-users to develop plans efficiently and rigorously without knowing what an SMT solver is.\\n\\n\\n[1] Webb, T., Mondal, S. S., Wang, C., Krabach, B., & Momennejad, I. A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models. arXiv preprint arXiv:2310.00194.\\n\\n[2] Fabiano, F., Pallagani, V., Ganapini, M. B., Horesh, L., Loreggia, A., Murugesan, K., ... & Srivastava, B. Plan-SOFAI: A Neuro-Symbolic Planning Architecture. In Neuro-Symbolic Learning and Reasoning in the era of Large Language Models, 2023. \\n\\n[3] Katz, M., Kokel, H., Srinivas, K., & Sohrabi, S. Thought of Search: Planning with Language Models Through The Lens of Efficiency. In The First Workshop on System-2 Reasoning at Scale, 2024\\n\\n[4] Liang, Jacky, et al. \\\"Code as policies: Language model programs for embodied control.\\\" 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.\\n\\n[5] Liu, Bo, et al. 
\\\"LLM+P: Empowering large language models with optimal planning proficiency.\\\" arXiv preprint arXiv:2304.11477 (2023).\"}", "{\"title\": \"Response to Reviewer FxuD - Round 2 2/4\", \"comment\": \"**PART (b): Our task-specific description - an example**\\\\\\nNext, to further clarify the task-specific efforts for prompt design, we use the Blocksworld task as an example.\\\\\\nFor multi-step problems, $\\\\color{#2986cc}{\\\\textsf{task descriptions}}$ and $\\\\color{#38761d}{\\\\textsf{queries}}$ are the **only task-specific prompts** needed from users. For the blocksworld problem, the $\\\\color{#2986cc}{\\\\textsf{\\\\textbf{task description}}}$ is: \\n\\n \\u201cThe robot has four actions: pickup, putdown, stack, and unstack. The domain assumes a world where there is a set of blocks that can be stacked on top of each other, an arm that can hold one block at a time, and a table where blocks can be placed.\", \"the_actions_defined_in_this_domain_include\": \"\", \"pickup\": \"allows the arm to pick up a block if the block is clear, the block is on_table, and the arm is empty. After the pickup action, the arm will be holding the block thus not empty, and the block will no longer be on_table or clear.\", \"putdown\": \"allows the arm to put down a block if the arm is holding a block. After the putdown action, the arm will be empty thus not holding the block, and the block will be on_table and clear.\", \"stack\": \"allows the arm to stack a block on top of another block if the arm is holding the top block and the bottom block is clear. After the stack action, the arm will be empty thus not holding the block, the top block will be clear and on top of the bottom block, and the bottom block will no longer be clear. unstack: allows the arm to unstack a block from on top of another block if the top block is on the bottom block, the arm is empty, and the top block is clear. 
After the unstack action, the arm will be holding the top block thus not empty, the top block will no longer be on top of the bottom block and not clear, and the bottom block will be clear.\\u201d,\\n\\n**and one example** $\\\\color{#38761d}{\\\\textsf{\\\\textbf{query}}}$ **is:**\\n\\n You have 4 blocks. \\n b is on top of c.\\n c is on top of d.\\n d is on top of a.\\n a is on the table.\\n b is clear.\\n Your arm is empty.\\n Your goal is to move the blocks. \\n a should be on top of c.\\n d should be on top of a.\\u201d\\n\\nPlease note that the above task-specific effort is the minimum effort needed to describe the problem, without which the planning problem would become ill-defined. We will show that this does not need heavy description engineering with a paraphrasing experiment.\\n\\nOther than that, all the prompt skeletons for different components, including the prompts for Definer, Formulator, Code Generator, Result Formatter, Self assess & Modification, are task-agnostic. The corresponding prompts are listed in Pages 43-52.\"}", "{\"title\": \"Response to Reviewer qLBs 4/4\", \"comment\": \"**qLBs-Q5: Why does encoding to PDDL require more human effort than LLMFP**\", \"qlbs_a5\": \"We would like to clarify that we do not mean encoding to PDDL inherently requires more human effort than encoding to SMT. Instead, existing methods using LLM+PDDL need human effort [1-5]. Specifically, [1,3,4] need task-specific in-context examples, [2] needs human corrections from experts, and [1,5] need an existing PDDL domain file. We listed a comprehensive set of papers in the related work section and discussed their major differences from LLMFP.\\n\\n**qLBs-Q6: How do you encode the length of the plan?**\", \"qlbs_a6\": \"* For multi-constraint problems such as Coffee, Workforce, Facility, Task_allocation, and Warehouse, since they are inherently combinatorial optimization problems, they are encoded as single-step problems. 
\\n* For multi-step problems such as Blocksworld, Mystery Blocksworld, Movie, and Gripper, we do not give LLMFP a fixed horizon. Instead, we use iterative deepening. The solver starts from timestep=1 to check satisfaction. If the solver finds a satisfiable solution for a given timestep, it is guaranteed to be the optimal solution; if the solver finds no solution given a timestep, it adds one more step and repeats the process. This process is repeated until reaching a predefined limit set by us. \\n\\nThe requirement of a predefined limit is a shortcoming of the SMT solver. In the future, we would love to extend our work to explore methods that could help mitigate the runtime issue for large-scale problems. Some possible directions are: 1) introducing methods to estimate the lower and upper bounds of step numbers needed, 2) developing heuristics to prioritize some possible options first, and 3) developing methods that put attention on a part of the map and ignore the unnecessary positions in the map. In addition, one advantage of LLMFP is that it can work with different solvers. Thus, we will also explore other solver options for long-horizon tasks.\\n\\n[1] Liu, Bo, et al. \\\"LLM+P: Empowering large language models with optimal planning proficiency.\\\" arXiv preprint arXiv:2304.11477 (2023).\\n\\n[2] Guan, Lin, et al. \\\"Leveraging pre-trained large language models to construct and utilize world models for model-based task planning.\\\" Advances in Neural Information Processing Systems 36 (2023): 79081-79094.\\n\\n[3] ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning.\\n\\n[4] Xie, Yaqi, et al. \\\"Translating natural language to planning goals with large-language models.\\\" arXiv preprint arXiv:2302.05128 (2023).\\n\\n[5] Silver, Tom, et al. \\\"Generalized planning in PDDL domains with pretrained large language models.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 18. 
2024.\\n\\n[6] Gundawar, Atharva, et al. \\\"Robust Planning with LLM-Modulo Framework: Case Study in Travel Planning.\\\" arXiv preprint arXiv:2405.20625 (2024).\"}", "{\"comment\": \"### Q: Coffee problem can be easily framed as a max-flow min-cost network?\", \"a\": \"We thank the reviewer for pointing this out! We agree that the fact that the problems can be formulated as MILP instances does not mean they are NP-Hard. We made some mistakes in the complexity analysis for the two instances of MILP problems: Coffee and Workforce (only the base setup). We have corrected them and revised the paper (Appendix A.1 Complexity Analysis). The new analysis is shown below. We would appreciate any corrections from your expertise:\\n* The coffee problem can be framed as a max-flow problem, which can be solved in polynomial time. Specifically, some algorithms can solve the max-flow problem in $O(VE)$ or $O(V^2E)$.\\n* The workforce problem, with no additional constraint, can also be framed as a max-flow problem. However, different types of constraints are added by users to form different instances. Some types of queries can increase the complexity. For example, \\\"What if Gu and Bob cannot work on the same day?\\\". **Adding constraints to introduce conflicting workers makes the problem as hard as a maximum independent set problem (also NP-Hard)**, where we add an edge between conflicting workers and find the maximum independent set. \\n* The facility problem **is an NP-Hard** problem because it is encoded as the Capacitated Facility Location Problem (CFLP). 
The formal definition of CFLP is as below: \\n\\n $\\\\min \\\\sum_{i=1}^n \\\\sum_{j=1}^m c_{ij} d_j y_{ij} + \\\\sum_{i=1}^n f_i x_i$\\n\\n $\\\\text{s.t.} \\\\quad \\\\sum_{i=1}^n y_{ij} = 1 \\\\quad \\\\text{for all } j = 1, \\\\ldots, m$\\n\\n $\\\\sum_{j=1}^m d_j y_{ij} \\\\leq u_i x_i \\\\quad \\\\text{for all } i = 1, \\\\ldots, n$\\n\\n $y_{ij} \\\\geq 0 \\\\quad \\\\text{for all } i = 1, \\\\ldots, n \\\\text{ and } j = 1, \\\\ldots, m$\\n\\n $x_i \\\\in \\\\{0, 1\\\\} \\\\quad \\\\text{for all } i = 1, \\\\ldots, n$\\n\\n where $x_i = 1$ if facility $i$ is open, and $x_i = 0$ otherwise. $y_{ij}$, for $i = 1, \\\\ldots, n$ and $j = 1, \\\\ldots, m$, represents the fraction of the demand $d_j$ filled by facility $i$.\\n\\nDespite the mistake, **at least 5 out of 9 planning problem categories (Facility, Task Allocation, Warehouse, Blocksworld, Mystery Blocksworld) we experimented on are proved to be NP-Hard**. Some instances of the remaining 4 planning problems can also be NP-Hard because user queries can introduce additional constraints. This shows the ability of LLMFP to solve diverse planning problems including NP-Hard problems, and its capability to understand and generalize to different natural language constraint additions/modifications. We believe that the correction on Coffee and Workforce does not detract from the core contributions and significance of our work. We sincerely appreciate the reviewer for catching this and helping strengthen our revised draft.\"}", "{\"comment\": \"Thank you for your extensive responses and extensive additional experiments, I deeply appreciate that. I have a few follow-up questions.\\n\\n> For 5 multi-constraint problems, they are all NP-hard problems. [...] these 3 problems are built as Mixed-integer linear programming (MILP) problems. [...] As MILP is known to be NP-hard, the first 3 problems are NP-hard.\\n\\nI disagree. The fact that a problem is an instance of MILP doesn't mean it's NP-hard. Note that e.g. 
simple path finding in a graph can also be framed as MILP. And it seems that e.g. the Coffee problem can be easily framed as a max-flow min-cost network (even for arbitrary number of suppliers, roasters, retailers, and coffee roast colours) and hence can be solved in polynomial time. It's hard to find a precise definition of other tasks to check.\\n\\n> we tested LLMFP on the Sokoban environment.\\n\\nThank you for that experiment. While <=6x6 instances with <=2 boxes is not the top complexity one could expect, I acknowledge that it (a) is indeed a non-trivial widely-used test (provided that those instance were not chosen to be super-simple), and (b) is enough to show the superiority of LLMFP over baselines. It would be interesting to push it even further and see how far it can go, but that's just for your consideration, you don't have to show me additional experiments. Could you instead share how long does it take to solve those instances?\\n\\n> In Appendix A.4 we included the time and cost analysis for LLMFP. As the reviewer suggested, we add time and cost comparisons for all methods.\\n\\nThank you for adding that. Please clarify what exactly means \\\"Average wall time (s) per **query**\\\" ? What exactly is a query? Is it a single problem instance? Are the queries evenly distributed among the methods? What's the average time required to solve a single instance for LLMFP (that's the most informative metric for me)?\"}", "{\"title\": \"Revision Summary: additional experiments, discussions, and draft revisions\", \"comment\": \"We thank all reviewers for their thoughtful comments and suggestions! 
To help reviewers better assess the updates we have made, we include this summary of revisions as below:\", \"to_summarize_the_additional_experiments_and_discussions_we_added\": [\"We added experiments on a complex **Sokoban** task for 15 queries\", \"LLMFP achieves optimal rates of **80%**, outperforming all baselines.\", \"Appendix A.1\", \"We added a baseline **Code_SMT** that is forced to use SMT for code generation for all tasks and LLMs\", \"LLMFP outperforms Code_SMT, which achieves an average of 2.7% and 62.4% for multi-constraint tasks, and 1.0% and 0.0% for multi-step tasks across two LLMs GPT-4o and Claude 3.5 Sonnet\", \"Table 1, Table 2, Section 4.2\", \"We provide results with **success rate** as the metric across all 9 tasks and for both LLMs\", \"LLMFP still outperforms the baselines\", \"Appendix A.3\", \"We added experiments that **explicitly** instruct baselines to output optimal solutions for multi-step problems\", \"LLMFP still outperforms the baselines\", \"Appendix A.7\", \"We added an experiment to test **PDDL-based** approaches on non-PDDL problems\", \"PDDL-based approaches cannot handle non-PDDL planning problems (multi-constraint problems)\", \"qLBs-A1\", \"We added theoretical insights into why LLMFP outperforms baselines. We added an experiment to directly include the **formal mathematical definition** of Coffee as the task description in the prompt, and test Direct with o1-preview\", \"LLMs cannot understand and solve an optimization problem. 
o1-preview only achieves 34.2%\", \"Appendix A.5.4\", \"We added **wall time** comparison and **cost** comparison\", \"LLMFP runtime and cost are reasonable, comparable to using o1-preview\", \"Appendix A.4\", \"We added **complexity** analysis and **failure cases** analysis\", \"All multi-constraint tasks, Blocksworld, and Mystery Blocksworld are proved to be NP-Hard\", \"Appendix A.1, Appendix A.5\", \"We added experiments to **paraphrase** task descriptions\", \"LLMFP is not sensitive to specific wordings of task descriptions\", \"Appendix A.10.4\", \"In addition to the above additions, we revised the following parts of the main paper to make the presentation clearer and colored the modifications/additional results and discussions with blue:\", \"Section 2 Related Works: we added more citations\", \"Section 3.1 Overview, 3.2 Definer, 3.3 Formulator: we revised the presentation\", \"Figure 2: we replaced Figure 2 with two to-the-point examples of the JSON representation\", \"Section 4.2: we added discussions of the new baseline Code_SMT\"]}", "{\"title\": \"Response to Reviewer FxuD - Round 2 4/4\", \"comment\": \"**PART (d): The flexibility of our task-specific effort**\\\\\\nTo prove the flexibility of our prompt, we also **paraphrase our NL task description and re-test the framework with a paraphrased description**. The paraphrasing is performed by LLMs.\\\\\", \"one_example_paraphrased_description_is\": \"In this blocksworld problem, a robot arm can perform four actions: pickup, putdown, stack, and unstack. The environment consists of blocks that can be stacked, a single-block capacity arm, and a table.\", \"pickup\": \"The arm can lift a block if it's clear, on the table, and the arm is empty. This results in the arm holding the block, which is no longer on the table or clear.\", \"putdown\": \"If the arm is holding a block, it can place it on the table. 
This leaves the arm empty and the block on the table and clear.\", \"stack\": \"The arm can place a block it's holding onto another clear block. This empties the arm, makes the top block clear and on the bottom block, while the bottom block becomes unclear.\", \"unstack\": \"If a clear block is on another block and the arm is empty, it can lift the top block. This results in the arm holding the top block (no longer clear or on the bottom block), while the bottom block becomes clear.\\n\\nWith LLM-paraphrased random task descriptions, we test on 50 queries in Blocksworld with Claude 3.5 Sonnet and show that LLMFP is still able to correctly generate 46/50 plans, reaching a high optimal rate of **92%, significantly outperforming baselines**. This shows our framework is **not sensitive** to the specific wordings of the task description, as long as they have adequate information. We include the result and analysis in Appendix A.10.4. We can add more paraphrasing examples to show the robustness of LLMFP to different user inputs, if the reviewer finds it helpful to show the generalizability of LLMFP.\", \"table\": \"Optimal rate (%) comparison of LLMFP with baselines with paraphrased prompts on Blocksworld with Claude 3.5 Sonnet\\n| Direct | CoT | Code | Code_SMT | LLMFP |\\n|--------------------------|-----------------------|-----------------------|---------------------------|-------------------------|\\n| 32.0 | 46.0 | 0.0 | 0.0 | **92.0** |\\n\\n**PART (e): Clarification on Code Generator prompt**\\\\\\nIn the comment you mentioned that \\u2018Upon revisiting the appendix section I observed that for each setup different prompts are created say multiple code generator prompts\\u2019. We want to clarify that we only have **one** Code Generator prompt, which is listed on pages 50-51. 
We are wondering if any part of the paper makes the reviewer feel there are multiple ones, and we would love to provide any further clarifications to avoid the confusion.\\n\\nWe hope the explanation, examples, and comparisons can make things clearer. We thank the reviewer for the careful consideration and helpful feedback!\\n\\n[1] Liu, Bo, et al. \\\"LLM+P: Empowering large language models with optimal planning proficiency.\\\" arXiv preprint arXiv:2304.11477 (2023).\\\\\\n[2] ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning\\\\\\n[3] Xie, Yaqi, et al. \\\"Translating natural language to planning goals with large-language models.\\\" arXiv preprint arXiv:2302.05128 (2023).\"}", "{\"title\": \"Response to Reviewer FxuD 2/3\", \"comment\": \"**FxuD-Q4: Theoretical insights regarding performance**\", \"fxud_a4\": \"Our insight is that LLMs are good at understanding the syntax and semantics of planning problems as optimization problems but are not good at solving optimization problems directly. Specifically,\\n\\n**Next-token prediction is fundamentally different from deterministic algorithms for optimization.** There\\u2019s a growing belief that next token prediction cannot truly model human thought and cannot support human-like capabilities of understanding a problem, imagining, curating, and backtracking plans before executing [1-3]. Specifically, the claim that next token predictions are \\u201cill-suited for planning tasks\\u201d is supported by works [4-7], which tested the planning capabilities of LLMs on various planning tasks. These works empirically show that in addition to identifying patterns in language and predicting the next word in a sequence, LLMs still cannot truly understand a problem and thus do not have the capability to perform intense calculations to optimize for any objectives. 
Thus, this is a major reason why baselines are not capable of solving the complex planning problems in our paper. However, since LLMFP teaches LLMs to build the optimization problem step by step and calls the external solver to solve for a plan, this bypasses the need to devise a plan by LLMs themselves.\\n\\n**LLMs cannot understand and solve an optimization problem. To support this claim,** we conduct an experiment on the Coffee task in which, instead of using natural language task descriptions as inputs, we directly map this Coffee task to an optimization problem and use the formal mathematical definition (refer to page 16 for the detailed formal definition) of this problem as the inputs to LLMs. Thus, LLMs do not need to understand the problem and find the underlying constraints, as a formal definition is given and could be directly solved. \\n\\nWe tested Direct with the most powerful LLM OpenAI o1-preview model on all queries of Coffee, which only achieves an optimal rate of **34.2%**. Compared to its 25.9% optimal rate with the natural language task description, this is not a significant improvement, given all goals and constraints are clearly formally specified in the new setting. This is consistent with the conclusion that LLMs still cannot solve optimization problems by themselves, even given a formal representation. **LLMFP enables LLMs to formalize planning problems as optimization problems.** Since SMT solvers are guaranteed to return correct answers given correct input, the high optimal and success rate of LLMFP indicates that LLMFP allows LLMs to parse the correct syntax and semantics information of a planning problem from its natural language description to a formal mathematical description. Such translation is also non-trivial when no task-specific examples are provided. 
As shown by our newly added baseline approach Code_SMT in Table 1 and 2, when we directly ask LLMs to translate and encode the natural language task description in an SMT format, the optimal rate is low, with an average of 2.7% and 62.4% for multi-constraint tasks, and 1.0% and 0.0% for multi-step tasks across two LLMs GPT-4o and Claude 3.5 Sonnet.\"}", "{\"summary\": \"This paper addresses the problem of solving planning problems that are given in natural language. The proposed algorithm \u2013 LLMFP \u2013 is a workflow of multiple LLMs, including an LLM to extract variables and constraints from the text, an LLM to formulate the extracted variables and constraints as an SMT problem in a specific format, an LLM to convert this format to code that can be run by an SMT solver, and an LLM to verify and correct mistakes by the other LLMs. This LLM workflow is evaluated against other LLM-based methods to solve planning problems, including one that is similar to LLMFP but creates PDDL instead of an SMT problem. The authors also examine how results can be better by adding some task-specific expertise. The results over a set of benchmark problems show that LLMFP is, in general, much better than the baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Strength\", \"The paper is in general clear (even if it is sometimes hand-wavy)\", \"The problem is interesting and the related work seems to cover all bases\", \"The results are impressive, and much better than the baselines.\", \"The proposed workflow makes sense and works well.\"], \"weaknesses\": [\"I\u2019m not sure if the novelty of the proposed work over the PDDL-based approach is sufficient for a top conference.\", \"The appendix is huge (~40 pages!). This seems to me not reasonable, as the main paper should be self-contained.\", \"The presentation is too hand-wavy. 
It would be great to try to capture more of it in a more formal manner\"], \"questions\": \"1. As the authors noted, LLMs have been used to translate natural language to planning problems. Similarly, the mapping from planning to SMT is well known in the planning literature. So, is the novelty limited to combining the two ideas together?\\n2. On page 3, just above the first paragraph, you seem to say that encoding to PDDL requires more human effort than encoding to SMT. Can you elaborate why?\\n3. How do you encode the length of the plan? When compiling planning to SAT or SMT, this is an issue because the solver (SAT/SMT) requires setting an upper bound, while in PDDL it does not.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer qLBs 2/4\", \"comment\": \"**qLBs-Q2: Appendix length**\", \"qlbs_a2\": \"The main reason why our appendix is long is that our framework is primarily a prompt-based framework with 5 components, and we tested LLMFP on 9 different tasks. In an effort to promote reproducibility and transparency, we put all prompts, task inputs, and task outputs in the appendix, all of which are lengthy; with 5 components across 9 tasks, the resulting logs are even longer. We believe that our main paper is self-contained since it includes all important descriptions, results, and analyses. We hope our effort to ensure transparency and reproducibility could be understood and appreciated.\\n\\n**qLBs-Q3: Hand-wavy presentation**\", \"qlbs_a3\": [\"We updated our draft and added some formal definitions and descriptions to make the presentation more rigorous and structured. 
Specifically,\", \"We have rewritten section 3.1 with a formal definition of the optimization problem\", \"We have modified sections 3.2 and 3.3 with more mathematical formulation to better illustrate our method\", \"We have also adjusted the description of 3.3 to make it more accessible to readers\", \"Please refer to our updated draft. All the changes are highlighted in blue.\"]}", "{\"title\": \"Response to Reviewer dysu 5/6\", \"comment\": \"**dysu-Q6: Are the methods explicitly instructed to provide optimal solutions?**\", \"dysu_a6\": \"For all methods, including LLMFP and baselines, we describe the goal of each multi-constraint task in the task description. For example, for the Coffee task, the task description *\\u201c...**The company's objective is to minimize the total cost**, including shipping beans, roasting, and shipping roasted coffee, while ensuring that all coffee produced meets or exceeds the demand at each retail location\\u201d* implicitly shows the goal is to find the plan that minimizes the total cost. LLMFP uses an SMT solver, which can **optimize the objective and guarantees to find the optimal solution if the formulation and generated codes are correct**. However, since all representations and codes are generated by LLMs, they are not 100% correct, so the success rates are not 100%.\\n\\n\\nThe methods are not explicitly instructed to provide optimal solutions for multi-step problems. However, since SMT solver guarantees to find the solution if there exists one, it can rigorously show the solution does not exist for smaller timesteps and increase timestep, thus can always find the optimal solution if the formulation and generated codes are correct (similarly, the success rates are not 100%). This is an advantage of incorporating a complete and sound solver like SMT in our framework. 
\\n\\n\\nHowever, to better understand the capabilities of baselines, we modify the baseline prompts to explicitly instruct them to find the optimal solution and re-evaluate them as Direct_Opt, CoT_Opt, Code_Opt, and Code_SMT_Opt on the 4 multi-step problems. We include the optimal rates in the table below and also in Appendix A.7. Compared with Table 2, we could observe that some baselines achieve better performance (from average 0.1% to 16.4% for Code_Opt_GPT-4o, and from average 30.9% to 36.7% for CoT_Opt_GPT-4o), while some achieve slightly worse performance (average 68.1% to 67.0% for Direct_Opt_o1-preview). However, **despite the changes due to the explicit instruction to find the optimal plan, LLMFP still largely outperforms all baselines.**\", \"table_7\": \"Optimal rate (%) comparison of LLMFP with baselines that are explicitly instructed to generate optimal plans on 4 multi-step problems.\\n| Method | Blocksworld | Mystery Blocksworld | Movie | Gripper | Average |\\n|-----------------------------|-------------|----------------------|--------|---------|---------|\\n| Direct_Opt GPT-4o | 35.2 | 0.8 | **100.0** | 0.0 | 34.0 |\\n| Direct_Opt o1-preview | 80.9 | 39.0 | **100.0** | 48.0 | 67.0 |\\n| CoT_Opt GPT-4o | 33.4 | 2.3 | 95.2 | 16.0 | 36.7 |\\n| Code_Opt GPT-4o | 0.0 | 3.8 | 61.9 | 0.0 | 16.4 |\\n| Code_SMT_Opt GPT-4o | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| LLMFP_GPT-4o | **96.2** | **77.7** | **100.0** | **76.0** | **87.5** |\\n||\\n| Direct_Opt Claude 3.5 Sonnet| 40.9 | 1.5 | **100.0** | 20.0 | 40.6 |\\n| CoT_Opt Claude 3.5 Sonnet | 52.5 | 4.5 | **100.0** | 20.0 | 44.2 |\\n| Code_Opt Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Code_SMT_Opt Claude 3.5 Sonnet| 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| LLMFP_Claude 3.5 Sonnet | **93.0** | **98.0** | **100.0** | **76.0** | **91.8** |\"}", "{\"metareview\": \"The paper presents LLM-Based Formalized Programming (LLMFP), a framework for incorporating LLMs to solve natural language planning tasks. 
The framework uses an LLM iteratively with external planning tools to create a viable solution. The LLM is used to extract variables and constraints from text, construct and parse an instance of an SMT formula from the text, and catch errors in the process via another LLM. The experiments demonstrate improved performance compared to a direct application of an off-the-shelf LLM.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed responses to the concerns raised by the reviewers. Most reviewers concur that the bar for acceptance has been reached. Unfortunately, reviewer 8M6X did not engage with the authors or other reviewers during the rebuttal and discussion periods despite recommending rejection. Furthermore, the review does not provide sufficient grounds for giving this paper the lowest possible rating of 1. To be fair to the authors, I am not taking 8M6X's rejection recommendation into consideration for the final decision. The other reviewers have unanimously recommended acceptance. As it stands, especially accounting for the changes already incorporated in the paper after the rebuttal period, the paper makes a significant enough contribution to warrant acceptance.\"}
At the same time, it showcases the instability of different LLMs in reaching strong performance, motivating the need for frameworks like LLMFP that could overcome the existing limitations of LLMs.\\n\\nBoth LLMs perform poorly for multi-step problems. As mentioned in dysu-Q3, we discussed the failure cases in Appendix A.5.\", \"table_5\": \"Optimal rate (%) comparison of LLMFP with baselines on 5 multi-constraint problems.\\n| Method | Coffee | Workforce | Facility | Task Allocation | Warehouse | Average |\\n|---------------------------------|--------|-----------|----------|-----------------|-----------|---------|\\n| Direct_GPT-4o | 0.8 | 2.6 | 0.0 | 0.0 | 0.0 | 0.7 |\\n| Direct_o1-preview | 25.9 | 47.6 | 4.8 | 4.0 | 66.0 | 29.7 |\\n| CoT_GPT-4o | 0.0 | 5.6 | 0.0 | 0.0 | 16.0 | 4.3 |\\n| Code_GPT-4o | 17.7 | 75.8 | 53.9 | 0.0 | 8.0 | 31.1 |\\n| Code_SMT_GPT-4o | 0.0 | 10.8 | 0.6 | 0.0 | 2.0 | 2.7 |\\n| LLMFP_GPT-4o | **64.7** | **92.2** | **70.7** | **96.0** | **72.0** | **79.1** |\\n||\\n| Direct_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| CoT_Claude 3.5 Sonnet | 7.1 | 0.0 | 0.0 | 0.0 | 14.0 | 4.2 |\\n| Code_Claude 3.5 Sonnet | 59.8 | 71.9 | 47.3 | 0.0 | 42.0 | 44.2 |\\n| Code_SMT_Claude 3.5 Sonnet | 75.6 | 36.8 | **49.7** | 86.0 | 64.0 | 62.4 |\\n| LLMFP_Claude 3.5 Sonnet | **80.5** | **88.7** | 48.2 | **96.0** | **90.0** | **80.7** |\", \"table_6\": \"Optimal rate (%) comparison of LLMFP with baselines on 4 multi-step problems.\\n| Method | Blocksworld | Mystery Blocksworld | Movie | Gripper | Average |\\n|---------------------------------|-------------|----------------------|--------|---------|---------|\\n| Direct_GPT-4o | 41.5 | 0.8 | 85.7 | 0.0 | 32.0 |\\n| Direct_o1-preview | 88.4 | 31.9 | **100.0** | 52.0 | 68.1 |\\n| CoT_GPT-4o | 39.9 | 2.7 | 81.0 | 0.0 | 30.9 |\\n| Code_GPT-4o | 0.0 | 0.3 | 0.0 | 0.0 | 0.1 |\\n| Code_SMT_GPT-4o | 0.0 | 0.0 | 0.0 | 4.0 | 1.0 |\\n| LLMFP_GPT-4o | **96.2** | **77.7** | **100.0** | **76.0** | **87.5** 
|\\n||\\n| Direct_Claude 3.5 Sonnet | 43.2 | 0.5 | **100.0** | 12.0 | 38.9 |\\n| CoT_Claude 3.5 Sonnet | 52.8 | 2.8 | **100.0** | 28.0 | 45.9 |\\n| Code_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Code_SMT_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| LLMFP_Claude 3.5 Sonnet | **93.0** | **98.0** | **100.0** | **76.0** | **91.8** |\"}", "{\"title\": \"Message from authors -- last day of draft revision\", \"comment\": \"Dear reviewer qLBs,\\n\\nWe sincerely appreciate your time and effort in evaluating our paper. Since this is the last day we can revise the draft, we would like to confirm whether our responses have effectively addressed your concerns. To summarize, in our response, \\n* We explained and demonstrated with experiments the novelty of LLMFP over PDDL-based approaches and LLM translation + SMT\\n* We updated the paper to make the presentation more rigorous and structured \\n* We answered the questions about the effort comparison of LLMFP and PDDL-based approaches and the LLMFP encoding length\\n\\nWe also included other important updates and additional experiments we have done in the Revision Summary. If there are any additional points you'd like us to discuss or consider, please do not hesitate to let us know. Your insights have been invaluable, and we're grateful for your feedback on our work. We look forward to further discussions with you!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"title\": \"Response to Reviewer FxuD - Round 2 3/4\", \"comment\": \"**PART (c): Comparison of task-specific efforts with other baselines**\\\\\\nAs a comparison, we compare the task-specific part of our prompt with that of another well-known approach, LLM+P [1], which uses LLMs to translate natural language (NL) descriptions of PDDL problems into PDDL problem files, and solves them with a PDDL solver. 
**We need **[$\\\\color{#2986cc}{\\\\textsf{\\\\textbf{NL task descriptions}}}$ + $\\\\color{#38761d}{\\\\textsf{\\\\textbf{query}}}$]**, where LLM+P needs [PDDL task description + 1 task-specific NL-> PDDL translation example + query].** \\n\\n* We summarize the task-specific part of their prompts here:\\n * To solve a problem, it first takes a task-specific NL->PDDL translation example:\\n \\n \\u200b\\u200bAn example planning problem is:\\n You have 5 blocks. \\n b2 is on top of b5. \\n b5 is on top of b1. \\n b1 is on top of b4. \\n b3 is on top of b2. \\n b4 is on the table. \\n b3 is clear. \\n Your arm is empty. \\n Your goal is to move the blocks. \\n b4 should be on top of b3.\", \"the_problem_pddl_file_to_this_problem_is\": \"(define (problem BW-rand-5)\\n (:domain blocksworld-4ops)\\n (:objects b1 b2 b3 b4 b5 )\\n (:init\\n (arm-empty)\\n (on b1 b4)\\n (on b2 b5)\\n (on b3 b2)\\n (on-table b4)\\n (on b5 b1)\\n (clear b3)\\n )\\n (:goal\\n (and\\n (on b4 b3))\\n )\\n )\\n\\n * Then it provides the NL description of the query it wants to solve and prompts LLMs to translate it to a PDDL problem file.\\n\\n You have 3 blocks. \\n b2 is on top of b3. \\n b3 is on top of b1. \\n b1 is on the table. \\n b2 is clear. \\n Your arm is empty. \\n Your goal is to move the blocks. \\n b2 should be on top of b3. \\n b3 should be on top of b1. 
\\n\\n * After translation, it provide this translated problem file and a **required PDDL domain file** into the solver:\\n\\n (define (domain blocksworld-4ops)\\n (:requirements :strips)\\n (:predicates (clear ?x)\\n (on-table ?x)\\n (arm-empty)\\n (holding ?x)\\n (on ?x ?y))\\n\\n (:action pickup\\n :parameters (?ob)\\n :precondition (and (clear ?ob) (on-table ?ob) (arm-empty))\\n :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) \\n (not (arm-empty))))\\n\\n (:action putdown\\n :parameters (?ob)\\n :precondition (holding ?ob)\\n :effect (and (clear ?ob) (arm-empty) (on-table ?ob) \\n (not (holding ?ob))))\\n\\n (:action stack\\n :parameters (?ob ?underob)\\n :precondition (and (clear ?underob) (holding ?ob))\\n :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob)\\n (not (clear ?underob)) (not (holding ?ob))))\\n\\n (:action unstack\\n :parameters (?ob ?underob)\\n :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))\\n :effect (and (holding ?ob) (clear ?underob)\\n (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))))\\n\\n* Our prompts are mostly natural language descriptions of the task, but LLM+P needs very structured prompts including task-specific PDDL problem file translation examples and the PDDL domain files. The files needed in LLM+P need to be strictly correct because they are input to the PDDL solver. One format/syntax error would result in failures in calling the solver. **The need of task-specific example and PDDL formatted files are more strict and hard to obtain from a non-expert.**\\n\\n* As a comparison, NL task description is **more flexible**. 
We provide experiments below to show that **paraphrased natural language task descriptions will not change the performance of LLMFP**.\\n\\n* In addition to LLM+P, other methods that try to solve this problem also require **even more** task-specific in-context examples and existing PDDL domain files: see Figure 2 of [2] (**>500 lines of task-specific examples** for Blocksworld, vs. LLMFP only needs 10 sentences of task description) and Figure 1 of [3]. They all clearly need more prompt engineering effort.\"}", "{\"title\": \"Response to Reviewer FxuD - Round 2 1/4\", \"comment\": \"Thank you for your reply! We are glad that you appreciate the theoretical insights. We revised the paper to include the analysis and the additional experiment showing LLMs cannot directly solve optimization problems in Appendix A.5.4. Currently, it is a little long to fit in the main text, but we would love to move the analysis to the main text when we put together the final version.\\n\\nWe believe the reviewer has misunderstandings about LLMFP being a general framework. First, we want to clarify the roles of different components in LLMFP. Then, we use a concrete example to demonstrate that **(a) we do not need new prompt engineering for each section and domain, (b) task-specific prompts for each task require very low engineering effort, (c) LLMFP has the lowest prompt engineering effort compared to other approaches, and (d) LLMFP has high user prompt robustness and can handle a variety of different prompts from different users.**\\n\\n**PART (a): Our prompt structure**\\\\\\nIn LLMFP, we have the following prompt components:\\n\\n1. $\\color{#E24A33}{\\textsf{\\textit{LLMFP key components}}}$: These prompts are the key part of LLMFP for understanding how to formulate planning problems as optimization problems. These prompts are **task-agnostic** and are embedded in LLMFP. 
They are the same files that **do not need to be modified at all** when solving different task domains. These include 5 components (Definer, Formulator, Code Generator, Result Formatter, and Self assess & Modification), corresponding to the prompts on Pages 43-52 in the paper. We have **only one prompt for each component**.\\\\\\nWe call these prompts \u2018$\\color{#E24A33}{\\textsf{prompt skeleton}}$\u2019 in the clarifications below. \\n\\n2. $\\color{#2986cc}{\\textsf{\\textit{Task description input}}}$: This gives a basic setup of the planning problem given by the user; for example, it includes the objects, actions, and the preconditions and effects of actions. They only need to be accurate since this is all the information LLMFP knows about the planning problem to be solved. In this paper, we considered 9 task domains. Therefore, there are 9 task description prompts. \\n\\n3. $\\color{#674ea7}{\\textsf{\\textit{API input}}}$: This gives a list of background information about the tasks as well as information on APIs that the planner can use. This is the same for all multi-step tasks. For multi-constraint tasks, this serves as a supplement to $\\color{#2986cc}{\\textsf{task descriptions}}$ (2), for example, specifying the exact shipping cost of coffee.\\n\\n4. $\\color{#38761d}{\\textsf{\\textit{Query input}}}$: This is the question raised by the user that (a) describes the initial and/or goal states, or (b) adds or modifies existing requirements of a particular task. Each task domain can have 21-602 queries that we use to evaluate the success rate. \\n\\nThe prompts for 2, 3, and 4 are task domain-specific, and are on Pages 26-33 in the paper. \\\\\\nThey contain information about the task and user requirements, and are the basics for any planning framework or solver. 
\\n\\nFor each planning instance, the prompt follows the [$\\\\color{#E24A33}{\\\\textsf{prompt skeleton}}$ (1) + $\\\\color{#2986cc}{\\\\textsf{task descriptions}}$ (2) +$\\\\color{#674ea7}{\\\\textsf{APIs}}$ (3) + $\\\\color{#38761d}{\\\\textsf{query}}$ (4)] pattern. Again $\\\\color{#E24A33}{\\\\textsf{prompt skeleton}}$ (1) is the same everywhere.\"}", "{\"summary\": \"The paper presents a framework that pairs LLMs with optimization tools to solve planning tasks without using task-specific knowledge. The authors define consecutive stages of reasoning that, generally speaking, consist of understanding, coding, and refining. For each stage, they discuss the prompting, formatting, and other relevant decisions. Through experimental validation, they show that LLMFP outperforms baselines in 9 domains.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I like the general idea and the presented approach. One could argue that it is simply a combination of prompt engineering and the incorporation of external tools. However, showing an effective way of doing this can be a significant contribution.\\n\\nThe baselines and ablations are well-chosen for evaluating the performance of LLMFP.\\n\\nThe paper is written very clearly, making it easy to read. The figures are well-chosen (particularly Figure 1), they are helpful in understanding the pipeline. I like the section structure and the focus on key takeaways when discussing experimental results. Most of my questions that arose while reading the text were addressed in later sections.\", \"weaknesses\": \"The goal stated in the introduction is \\\"Can we build a universal LLM-based planning system that can solve complex planning problems without task-specific efforts?\\\". However, my main concern is whether the tasks used for experiments are indeed complex planning problems. 
Specifically, the 5 multi-constraint problems simply resemble optimization problems rather than planning problems. Hence it's quite clear that adding an external optimizer to an LLM would be much better than just using an LLM. On the other hand, the multi-step problems seem to be rather simple and the main difficulty is to understand what we have to do rather than finding a good solution. Hence, I suggest adding at least one multi-step domain with high underlying complexity (e.g. Sokoban). If I missed something and some of your environments are actually NP-hard (or hard in any other reasonable sense), it should be remarked in the paper.\\n\\nSince the method you propose is clearly subject to a tradeoff between performance and computation time, there should be a discussion of that. What's the wall time of LLMFP compared to the baselines? What's the cost of using SMT compared to querying LLM?\\n\\nThe description of baselines should be extended a bit. Are they prompted vanilla models plus the components described in lines 410-416, or do they also include other components, e.g. formatter? Also, the Code variant uses pure Python, but for a completely fair comparison you should also add a variant that is forced to use SMT like LLMFP does. After reading the prompts used, it's also not clear to me whether they are explicitly instructed to provide optimal solutions, which is captured by the metrics. Also, I suggest discussing the failure modes of the baselines (in the Appendix).\", \"questions\": \"1. What are the most common failure modes of the baselines?\\n\\n2. Are the baselines prompted vanilla models plus the components described in lines 410-416, or do they also include other components, e.g. formatter?\\n\\n3. What are the success rates of the tested methods? Do they all achieve 100% and the question is only whether the solution is optimal, or do some methods fail to solve some instances at all?\\n\\n4. What's the wall time of LLMFP compared to the baselines?\\n\\n5. 
Are the methods explicitly instructed to provide optimal solutions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have no concerns.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8M6X 1/2\", \"comment\": \"We thank the reviewer for comments and suggestions! We provide some clarifications and discussions regarding the weakness and questions proposed by the reviewer, and experiments suggested by other reviewers (Please see the Revision Summary). We also updated a revised draft and colored the modifications/additional discussions with blue.\\n\\n**8M6X-Q1: Baselines of LLMFP**\", \"8m6x_a1\": \"The baselines for comparisons in our paper are **fair comparisons** in that they have the **same inputs** and are all **zero-shot methods with no task-specific examples**. To the best of our knowledge, LLMFP is the first general-purpose planning framework that requires no task specific examples or external critics. We were not able to locate other frameworks/architectures that provide such a fair comparison to LLMFP at the time of submitting the paper.\\n\\nRegarding the three papers -\\n\\n* We have added the three papers mentioned by the reviewer to the **Related Works** section as they are relevant LLMs for Planning papers.\\n* However, these papers are **not comparable** to our framework LLMFP. This is because we focus on the setting where we do not have task-specific examples, external humans/verifiers/critics, whereas [1-3] require either domain-specific examples or feedback from real humans. Specifically, [1] requires task-specific in-context examples, [2] requires a Memory that is filled with 25 solved problems, and [3] is interactive and user-guided, which needs a real human person capable of validating the code. 
\\n* We noticed that [1] recently submitted another version to ArXiv after the ICLR submission deadline, which added a new experiment that investigated a zero-shot version of their method, MAP, on one multi-step task, Mystery Blocksworld, and showed that MAP could solve **8.2%** of the problems. Meanwhile, LLMFP could achieve **77.7%** and **98.0%** optimal rates for GPT-4o and Claude 3.5 Sonnet, respectively. The comparison of the performance results further proves the effectiveness of our framework, which **significantly outperformed the zero-shot MAP.**\\n\\n**8M6X-Q2: Do LLMs hallucinate or overuse APIs?**\", \"8m6x_a2\": \"We do observe some hallucinations and misuse of the APIs. For example, \\n* We provide a Max(variable_list) function to LLMs for the task allocation task. Although the input is a list of variables, LLMs sometimes input multiple variables separately and use it like Max(variable_1, variable_2,...). This phenomenon would result in runtime errors and can be corrected during later iterations. \\n* As we discussed in Appendix A.4.5, LLMFP Failure Case Analysis, the major failure case for Warehouse is: Code Generator overwrites the provided API get_distance and provides 1 as the output. Thus, the distance between each station is mistakenly set to be 1. Although we do not provide a fix for this issue in our paper, it is easy to locate and correct this type of issue by adding a checker to check for repeated initialization of functions. \\n\\nThese hallucinations and misuses of APIs are sporadic, however, and LLMFP still significantly outperforms all baselines. 
We have added more experiments and discussions to address the concerns of the reviewer. We also updated a revised draft and colored the modifications/additional results and discussions with blue.\", \"to_summarize\": [\"We added experiments on **Sokoban** tasks for 15 queries\", \"LLMFP achieves optimal rates of **80%**, outperforming all baselines\", \"We added a baseline **Code_SMT** that is forced to use SMT for code generation for all tasks and LLMs\", \"LLMFP outperforms Code_SMT, which achieves an average of 2.7% and 62.4% for multi-constraint tasks, and 1.0% and 0.0% for multi-step tasks across two LLMs GPT-4o and Claude 3.5 Sonnet\", \"We provide results with **success rate** as the metric across all 9 tasks and for both LLMs\", \"LLMFP still outperforms the baselines\", \"We added experiments that **explicitly** instruct baselines to output **optimal** solutions for multi-step problems\", \"LLMFP still outperforms the baselines\", \"We added wall time comparison and cost comparison\", \"LLMFP runtime and cost are reasonable, comparable to using o1-preview\", \"We added complexity analysis and failure cases analysis\", \"All multi-constraint tasks, Blocksworld, and Mystery Blocksworld are proved to be NP-Hard\"], \"detailed_response\": \"**dysu-Q1: The complexity of planning problems**\", \"dysu_a1\": [\"We thank the reviewer for bringing up the discussion about the complexity of our planning problems. We include a more detailed complexity discussion and put it in Appendix A.1. In short, many of the tasks we experimented on are NP-hard problems. In particular -\", \"For 5 multi-constraint problems, they are **all NP-hard problems.** Specifically,\", \"We use the benchmark from [1] for the first 3 problems (Coffee, Workforce, and Facility), in which these 3 problems are built as Mixed-integer linear programming (MILP) problems. Please refer to Appendix A.1 for the formal optimization problem definition example. 
As MILP is known to be NP-hard, the first 3 problems are NP-hard.\", \"For the Task Allocation problem, since it is equivalent to a multi-agent traveling salesman problem (agents=robots, tasks=cities), the classic traveling salesman problem (TSP) reduces to it, and thus it is also NP-hard.\", \"For the Warehouse problem, as TSP is a special case when one station can be used to finish one specific task, and there are no extra stations, the Warehouse problem is at least as complex as TSP and, thus, is also NP-hard.\", \"For multi-step problems, Blocksworld has been proved to be an NP-hard problem [2], and the same is true for Mystery Blocksworld, as it is the same problem with obfuscated names. Although there is no existing proof that they are NP-hard, Movie has 13 predicates and 9 possible actions, and Gripper has 4 types of objects (rooms, objects, robots, grippers), 4 predicates, and 3 possible actions. These show that they are not simple, straightforward tasks.\", \"**dysu-Q2: More complex multi-step problem - Sokoban**\"], \"dysu_a2\": \"As the reviewer suggested, we tested LLMFP on the Sokoban environment. Due to time limitations, we created a pilot evaluation set containing 15 queries describing the game setup and goals with different map sizes and numbers of boxes. We have five queries with 5x5 maps and 1 box, five queries with 6x6 maps and 1 box, and five queries with 5x5 maps and 2 boxes. The evaluation results are presented in the following table.\", \"table_1\": \"Optimal rate (%) comparison of LLMFP with baselines on the Sokoban problem\\n| Direct_GPT-4o | Direct_o1-preview | CoT_GPT-4o | Code_GPT-4o | Code_SMT_GPT-4o | LLMFP_GPT-4o |\\n|--------------------------|----------------------------|-----------------------|-----------------------|---------------------------|-------------------------|\\n| 0.0 | 26.7 | 0.0 | 0.0 | 0.0 | **80.0** |\\n\\n\\nAs can be observed, LLMFP achieves an optimal rate of 80%, outperforming the baselines. 
The new results, along with other problems, showcase the potential of LLMFP to solve complex problems. We will keep working on expanding our query set and will add the full results to our paper.\\n\\nWe added this result in Appendix A.1 LLMFP Performance on Sokoban of the revised paper, as well as the discussion of failure mode. Please refer to the revision page 16 for detailed discussions.\"}", "{\"title\": \"Response to Reviewer FxuD - Round 3 -Followup\", \"comment\": \"Dear reviewer FxuD,\\n\\nWe would like to follow up on our pending discussion about the \\u2018overhead\\u2019 of our proposed method.\\n\\nIn our last response, we show LLMFP does not necessarily need stronger models by testing it with GPT-4 for two multi-step tasks, Blocksworld and Gripper. We would like to provide some more experiments to support this claim. We performed additional experiments to test it with GPT-4 on three more multi-constraint tasks, Workforce, Task Allocation, and Warehouse. We tested 10 queries from each domain and got an optimal rate of 80%, 70% and 70% respectively. \\n\\nWe would like to confirm whether our responses have effectively addressed your concerns from the last and follow-up discussions. If there are any additional points you'd like us to discuss or consider, please do not hesitate to let us know. As the discussion period ends on Dec 2, we look forward to timely discussions with you!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"comment\": \"Dear reviewer dysu,\\n \\nThank you for your reply! We are glad to hear that our response addresses your concerns and that you recommend accepting the paper. Since your score is [borderline accept], would you please let us know what concerns are holding you back from further raising the score to [accept]? We are more than happy to address them!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"title\": \"Response to Reviewer dysu 6/6\", \"comment\": \"**dysu-Q7: Success rates of tested methods? 
Do they all achieve 100%?**\", \"dysu_a7\": \"We thank the reviewer for the suggestion to include success rates as another metric for evaluation. We include the success rates of all methods in the tables below and also in Appendix A.3. \\n\\nNote that although the optimization goal is described in the task description for multi-constraint problems, we exclude the optimization goal when calculating the success rate and only evaluate whether the plan fulfills the task setup and the query. This greatly reduces the difficulty of the multi-constraint problems for the baseline methods. For example, even assigning all tasks to one robot is considered a success for the task allocation task. Thus, the success rates of all baselines for multi-constraint problems are significantly higher than the optimal rates. However, the success rates of LLMFP remain almost the same as its optimal rates, since the SMT solver is guaranteed to output the optimal result given a correct encoding. **Even when compared in such an unfair way, LLMFP still outperforms the other baselines, with an average of 86.4%, 18.1% higher than the best baseline.** \\n\\nFor the multi-step problems, in contrast, since all initial conditions, predicate and action definitions, and goals are the same, developing a reasonable and correct plan is not significantly easier than developing an optimal plan with the least number of steps. 
Thus, the success rates of baselines are improved, but not significantly, compared to the optimal rates.\", \"table_8\": \"Success rate (%) comparison of LLMFP with baselines on 5 multi-constraint problems.\\n\\n| Method | Coffee | Workforce | Facility | Task Allocation | Warehouse | Average |\\n|-----------------------------|--------|-----------|----------|------------------|-----------|---------|\\n| Direct_GPT-4o | 5.6 | 54.5 | 31.7 | **100.0** | 42.0 | 46.8 |\\n| Direct_o1-preview | 26.3 | **92.6** | 41.5 | 94.0 | 86.0 | 68.1 |\\n| CoT_GPT-4o | 17.7 | 72.3 | 31.7 | **100.0** | 82.0 | 60.7 |\\n| Code_GPT-4o | 18.8 | 76.2 | 64.6 | 92.0 | 90.0 | 68.3 |\\n| Code_SMT-GPT-4o | 0.0 | 10.8 | 1.2 | 0.0 | 34.0 | 9.2 |\\n| LLMFP_GPT-4o | **64.7** | 92.2 | **79.3** | **100.0** | **96.0** | **86.4** |\\n||\\n| Direct_Claude 3.5 (Sonnet) | 5.3 | **91.3** | 36.0 | **100.0** | 76.0 | 61.7 |\\n| CoT_Claude 3.5 (Sonnet) | 10.9 | 60.6 | 1.2 | **100.0** | 96.0 | 53.7 |\\n| Code_Claude 3.5 (Sonnet) | 61.3 | 89.2 | 59.1 | **100.0** | 60.0 | 73.9 |\\n| Code_SMT-Claude 3.5 (Sonnet)| 77.1 | 39.0 | 59.1 | 90.0 | 74.0 | 67.8 |\\n| LLMFP_Claude 3.5 (Sonnet) | **80.5** | 88.7 | **61.6** | **100.0** | **92.0** | **84.6** |\\n\\n---\", \"table_9\": \"Success rate (%) comparison of LLMFP with baselines on 4 multi-step problems.\\n| Method | Blocksworld | Mystery Blocksworld | Movie | Gripper | Average |\\n|-----------------------------|-------------|----------------------|-------|---------|---------|\\n| Direct_GPT-4o | 56.1 | 1.0 | 90.5 | 16.0 | 40.9 |\\n| Direct_o1-preview | 90.9 | 37.9 | 100.0 | **76.0** | 76.2 |\\n| CoT_GPT-4o | 62.0 | 3.0 | **95.2** | 10.0 | 42.5 |\\n| Code_GPT-4o | 0.0 | 0.3 | 0.0 | 0.0 | 0.1 |\\n| Code_SMT-GPT-4o | 0.2 | 0.0 | 0.0 | 4.0 | 1.0 |\\n| LLMFP_GPT-4o | **96.2** | **77.7** | **100.0** | **76.0** | **87.5** |\\n||\\n| Direct_Claude 3.5 Sonnet | 54.5 | 0.5 | **100.0** | 56.0 | 52.7 |\\n| CoT_Claude 3.5 Sonnet | 76.1 | 3.2 | **100.0** | 72.0 | 62.8 |\\n| 
Code_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Code_SMT-Claude 3.5 Sonnet| 0.0 | 0.0 | 4.0 | 0.0 | 1.0 |\\n| LLMFP_Claude 3.5 Sonnet | **93.4** | **98.0** | **100.0** | **76.0** | **91.8** |\\n\\n[1] Li, Beibin, et al. \\\"Large language models for supply chain optimization.\\\" arXiv preprint arXiv:2307.03875 (2023).\\n\\n[2] Gupta, Naresh, and Dana S. Nau. \\\"On the complexity of blocks-world planning.\\\" Artificial intelligence 56.2-3 (1992): 223-254.\"}", "{\"title\": \"Response to Reviewer qLBs 1/4\", \"comment\": \"We thank the reviewer for the constructive comments and helpful feedback! You brought up some great questions and suggestions, which have helped improve our work. **We have added additional experiments to show that the PDDL-based approach cannot solve some planning problems we consider in this paper**, and experiments suggested by other reviewers (Please see the Revision Summary). We address the reviewer's concerns below. We also updated a revised draft and colored the modifications/additional results and discussions with blue.\\n\\n**qLBs-Q1: Novelty of the proposed work over the PDDL-based approach**\", \"qlbs_a1\": \"Our framework is **the first of its kind to perform zero-shot planning from natural language description across various domains**. We argue that the existing approaches, including the PDDL-based approach, are not sufficient to solve the problem we consider in this paper. Specifically, \\n* **Existing PDDL-based approaches rely on task-specific efforts.** Our LLMFP framework does not need any task-specific examples or human inputs. However, current PDDL-based approaches depend either on task-specific in-context examples[1,3,4], existing PDDL domain files[1,5], or human corrections[2]. These all limit their cross-task generalization capabilities. 
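The success-vs-optimal distinction above reduces to whether the objective value is checked in addition to feasibility. As a concrete sketch (hypothetical helper names; this is not the paper's actual evaluation code), the two metrics can be computed as:

```python
def success_rate(plans, is_feasible):
    """Fraction of plans that satisfy the task setup and the query,
    ignoring the optimization objective."""
    return sum(1 for p in plans if is_feasible(p)) / len(plans)

def optimal_rate(plans, is_feasible, cost, best_cost):
    """Fraction of plans that are feasible AND attain the optimal
    objective value (e.g., minimum total cost or fewest steps)."""
    hits = [p for p in plans if is_feasible(p) and cost(p) == best_cost]
    return len(hits) / len(plans)
```

Under these definitions, assigning all tasks to one robot can count toward the success rate (it is feasible) while missing the optimal rate, which is why the baselines' success rates rise well above their optimal rates on the multi-constraint problems, while LLMFP's two rates nearly coincide.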
\\n* **PDDL-based approaches cannot handle general planning problems.** Even if the task-specific efforts in the existing PDDL-based approaches can be omitted in the future version of LLMs, the fact that PDDL planners need domain and problem files limits their applicability to non-PDDL problems that we consider in this paper. For example, the multi-constraint problems, such as the Coffee task in our paper, cannot be solved using PDDL-based planners in a zero-shot way. Specifically, the Coffee task\\u2019s goal is to develop plans that satisfy retail location demands considering supplier capacities with minimized total cost. The definition of actions, as needed in the PDDL domain file, is not clear and straightforward. We slightly modified the prompts from [1], a work that uses LLMs to generate PDDL domain files, to generate actions and their preconditions and effects for the Coffee task, given only the description of the natural language task (same setting as LLMFP). The method in [1] cannot generate reasonable action definitions. One representative failure mode is:\\n * Action 1: Source Beans from Supplier\\n * Action: This action enables the company to source a unit of coffee beans from a supplier.\\n * Parameters:\\n 1. `?s - supplier`: the supplier from which to source beans\\n 2. `?f - facility`: the facility to which the beans are shipped\\n * Preconditions:\\n (and\\n (supplier-has-capacity ?s)\\n (facility-can-receive ?f))\\n * Effects:\\n (and\\n (not (supplier-has-capacity ?s))\\n (facility-has-beans ?f))\\n * Failure reason: This action sources only one unit of coffee beans, but the effects mean the supplier would not have any capacity after the action, which does not take the number of coffee beans capacity of suppliers into consideration. \\n\\n Other than this, actions are also not defined comprehensively. 
For example, from the output, we observe that only \\u201cShip Dark Coffee to Retail Location\\u201d is defined, but shipping light coffee is not considered. \\n\\n These results show the generation of PDDL domain files is not straightforward for non-PDDL problems, which limits PDDL-based approaches\\u2019 applicability. Even for PDDL problems, no existing PDDL-based approach can plan PDDL problems in a zero-shot way without PDDL domain files or human corrections. However, LLMFP is able to handle diverse sets of problems, including multi-constraint problems and multi-step problems without task-specific examples, as shown in Table 1 and 2 in the paper.\\n* LLMFP can use any solvers. Although we use SMT as the solver in this work, LLMFP can be adapted to any planner or solver by updating the requirements and representation format in the prompts. We already included an example in Appendix A.7.3 in the original paper, where prompts can be easily modified to support using MILP. We would love to extend our work to support more solvers or even multiple solvers at the same time. In fact, we believe that each solver has its own specialized types of problems. Our future research plan is to equip LLMs with various solvers and allow LLMs to select preferred solvers or planners automatically.\"}", "{\"title\": \"Message from authors -- last day of draft revision\", \"comment\": \"Dear reviewer 8M6X,\\n\\nWe sincerely appreciate your time and efforts in evaluating our paper. Since this is the last day we can revise the draft, we would like to confirm whether our responses have effectively addressed your concerns. 
To summarize, in our response, \\n* We added the citations for the three mentioned papers to the Related Work section\\n* We discussed why they are not comparable to our method (although a work updated its draft with one zero-shot experiment after the ICLR submission deadline, LLMFP significantly outperforms it on this task)\\n* We answered the questions about planning problem definition, LLM hallucination, and reasons for not directly using a symbolic solver\\n\\nWe also included other important updates and additional experiments we have done in the Revision Summary. If there are any additional points you'd like us to discuss or consider, please do not hesitate to let us know. Your insights have been invaluable, and we're grateful for your feedback on our work. We look forward to further discussions with you!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"title\": \"Response to Reviewer dysu 3/6\", \"comment\": \"**dysu-Q4: Baseline descriptions and failure modes discussion**\", \"dysu_a4\": \"We thank the reviewer for the valuable suggestion! All baselines are equipped with a result formatter to convert their generated plans, in various forms, to a fixed format for better evaluation. We\\u2019ve added more baseline descriptions in the updated draft. We briefly described the major failure cases of baselines as captions of Example baseline outputs in Appendix A.8.1. To make the discussion more structured, we include more detailed failure mode discussions in Appendix A.5.\", \"here_we_provide_a_summary_and_some_example_failure_cases\": \"To summarize, Direct and CoT fail to solve multi-constraint problems because they do not have the capability to directly solve the optimal solution considering various constraints, intensive calculations, and numerous possible solutions, and they fail to solve multi-step problems because they cannot consider preconditions and effects of all actions accurately. 
For example, given a gripper task with one robot and thus two grippers, CoT generates a solution such as: *[\\\"**pick** ball3 robot1 room3 left_gripper\\\", \\\"**pick** ball5 robot1 room3 right_gripper\\\", \\\"move robot1 room3 room4\\\", \\\"**drop** ball5 robot1 room4 right_gripper\\\", \\\"**pick** ball1 robot1 room4 right_gripper\\\", \\\"**pick** ball4 robot1 room4 left_gripper\\\"...]*, which attempts to pick up three balls with only two grippers. This failure is due to ignoring the action preconditions.\\n\\nCode and Code_SMT, in contrast, often fail because they do not encode the problem setup, query, and constraints/actions correctly, or because they write code with incorrect logic or syntax. For example, Code_SMT often fails to distinguish the difference between And and Implies, and Code sometimes ignores the task description \\u201cthe finish time counts the time when the last robot stops working\\u201d for the task_allocation task.\"}", "{\"comment\": \"Thank you once again for your extensive responses. My questions have been resolved. I will maintain my rating and recommend accepting the paper.\"}", "{\"title\": \"Message from authors -- last day of draft revision\", \"comment\": \"Dear reviewer dysu,\\n\\nWe sincerely appreciate your time and efforts in evaluating our paper. Since this is the last day we can revise the draft, we would like to confirm whether our responses have effectively addressed your concerns. To summarize, in our latest response, we updated the complexity analysis of tasks and answered further questions regarding the per-query wall time and the Sokoban runtime.\\n\\nIf there are any additional points you'd like us to discuss or consider, please do not hesitate to let us know. Your insights have been invaluable, and we're grateful for your feedback on our work. 
We look forward to further discussions with you!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"summary\": \"LLMFP is proposed which leverages LLMs to tackle complex planning problems by formulating them as optimization tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"LLMFP's ability to handle a wide variety of planning problems without task-specific examples is a significant strength.\", \"weaknesses\": \"1. The baselines for comparison do not seem to be a fair comparison to LLMFP. See questions.\\n2. The related work does not cover relevant set of papers that should have been used a baseline to compare this work. Mentioning a few of them below - \\n[1] Webb, T., Mondal, S. S., Wang, C., Krabach, B., & Momennejad, I. (2023). A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models. arXiv preprint arXiv:2310.00194.\\n[2] Fabiano, F., Pallagani, V., Ganapini, M. B., Horesh, L., Loreggia, A., Murugesan, K., ... & Srivastava, B. (2023, December). Plan-SOFAI: A Neuro-Symbolic Planning Architecture. In Neuro-Symbolic Learning and Reasoning in the era of Large Language Models.\\n[3] Katz, M., Kokel, H., Srinivas, K., & Sohrabi, S. (2024). Thought of Search: Planning with Language Models Through The Lens of Efficiency. In The First Workshop on System-2 Reasoning at Scale, NeurIPS'24.\", \"questions\": \"1. What is the definition of a planning problem in this paper?\\n2. Why are the baselines only LLMs when the proposed approach is a framework/architecture? LLM-PFC [1] approaches planning problems similarly and there are other baselines to consider like Plan-SOFAI [2].\\n3. LLMs when used with API's are found to hallucinate new API functions or overuse a specific API call. Is such behavior observed here?\\n4. 
When it is a planning problem, why not directly use a symbolic planner and why is this architecture beneficial?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer dysu 2/6\", \"comment\": \"**dysu-Q3: Time and cost comparison of baselines and LLMFP**\", \"dysu_a3\": \"In Appendix A.4 we included the time and cost analysis for LLMFP. As the reviewer suggested, we add time and cost comparisons for all methods. \\n\\nThe following tables show the wall time comparison of all methods for GPT-4o on 9 tasks (also in Appendix A.4). From the results, we observe that the time taken for LLMFP, although longer than most of the baselines, is within a reasonable range. Especially for multi-constraint problems, it is shorter than Direct with o1-preview because of the inherent difficulty for LLMs to solve these combinatorial optimization problems.\", \"table_2\": \"Average wall time (s) per query comparison for 5 multi-constraint problems with GPT-4o.\\n| Method | Coffee | Workforce | Facility | Task Allocation | Warehouse | Average |\\n|-----------------------|--------|-----------|----------|-----------------|-----------|---------|\\n| Direct_GPT-4o | 8.8 | 2.2 | 2.1 | 1.8 | 0.9 | 3.2 |\\n| Direct_o1-preview | 104.2 | 63.9 | 77.7 | 70.5 | 63.7 | 76.0 |\\n| CoT_GPT-4o | 16.9 | 12.0 | 6.0 | 9.6 | 7.4 | 10.4 |\\n| Code_GPT-4o | 30.6 | 10.0 | 8.2 | 5.7 | 7.1 | 12.3 |\\n| Code_SMT GPT-4o | 30.0 | 15.3 | 10.3 | 15.0 | 8.3 | 15.8 |\\n| LLMFP_GPT-4o | 87.1 | 55.1 | 29.9 | 62.3 | 28.9 | 52.7 |\", \"table_3\": \"Average wall time (s) per query comparison for 4 multi-step problems with GPT-4o.\\n| Method | Blocksworld | Mystery Blocksworld | Movie | Gripper | Average |\\n|-----------------------|-------------|----------------------|-------|---------|---------|\\n| Direct_GPT-4o | 0.7 | 0.7 | 0.5 | 8.8 | 2.7 |\\n| Direct_o1-preview | 26.3 | 87.9 | 25.7 | 23.8 | 40.9 
|\\n| CoT_GPT-4o | 2.1 | 4.0 | 1.0 | 10.2 | 4.3 |\\n| Code_GPT-4o | 19.7 | 8.9 | 7.3 | 8.2 | 11.0 |\\n| Code_SMT GPT-4o | 9.1 | 8.5 | 10.6 | 12.9 | 10.3 |\\n| LLMFP_GPT-4o | 43.3 | 48.3 | 58.6 | 141.6 | 73.0 |\\n\\nThe table below shows the average cost comparison of all methods on the coffee task. We observe that although LLMFP is more costly than most of the baselines, it is cheaper than Direct with o1-preview with better performance. In addition, the average cost per query for all 9 tasks is around 0.1 dollars, indicating LLMFP is not very costly.\", \"table_4\": \"Average cost ($) per query comparison of LLMFP on the Coffee task.\\n| Direct_GPT-4o | Direct_o1-preview | CoT_GPT-4o | Code_GPT-4o | Code_SMT_GPT-4o | LLMFP_GPT-4o |\\n|--------------------------|----------------------------|-----------------------|-----------------------|---------------------------|-------------------------|\\n| 0.008 | 0.536 | 0.013 | 0.023 | 0.024 | 0.139 |\"}", "{\"title\": \"Message from authors -- last day of draft revision\", \"comment\": \"Dear reviewer FxuD,\\n\\nWe sincerely appreciate your time and efforts in evaluating our paper. Since this is the last day we can revise the draft, we would like to confirm whether our responses have effectively addressed your concerns. To summarize, in our latest response, we clarified in detail why LLMFP is a generic framework and added experiments to show that LLMFP is not sensitive to specific wordings of task descriptions.\\n\\nIf there are any additional points you'd like us to discuss or consider, please do not hesitate to let us know. Your insights have been invaluable, and we're grateful for your feedback on our work. We look forward to further discussions with you!\\n\\nBest,\\\\\\nAuthors of paper 6029\"}", "{\"title\": \"Response to Reviewer FxuD 1/3\", \"comment\": \"We thank the reviewer for the valuable comments and suggestions! 
We provide some clarifications, discussions, and experiments regarding the weakness and questions proposed by the reviewer, and experiments suggested by other reviewers (Please see the Revision Summary). We also updated a revised draft and colored the modifications/additional discussions with blue.\\n\\n**FxuD-Q1: Complexity in FORMULATOR**\", \"fxud_a1\": \"Thanks for the suggestion. We have rewritten the FORMULATOR section. In particular, \\n* We rewrote the first two paragraphs with a clearer way of introducing the JSON representation and field.\\n* We cut down the paragraph Single-Step Multi-Constraint Problem to make it less dense for the readers.\\n\\nPlease refer to our updated draft. All the changes are highlighted in blue.\\n\\n**FxuD-Q2: Unclear presentation in \\u2018Multi-Step Planning Problem\\u2019 section**\", \"fxud_a2\": \"Thanks for the suggestion. We have modified the \\u2018Multi-Step Planning Problem\\u2019 section. In particular,\\n* We rewrote the start of the paragraph with a clearer explanation of what different stages mean, and disambiguated them from the fields.\\n* We replaced Figure 2 with two clear, to-the-point examples of the JSON representation. Figure 2 can now better assist the explanation.\\n\\nPlease refer to our updated draft. All the changes are highlighted in blue.\\n\\n**FxuD-Q3: The statement \\u2018Our approach does not require task-specific examples or task-specific efforts\\u2019 is not supported**\", \"fxud_a3\": \"We would like to clarify that the only task-specific information is the user description of the planning problem and the user\\u2019s question (i.e., lines 188-196). This is the minimum information needed to define and understand the planning problem.\\n\\nOther than that, all the prompts for different modules are task-agnostic, i.e. the same prompt works for all planning problems. To emphasize this, we added sentences like \\u2018which is task-agnostic\\u2019 or \\u2018which is invariant across tasks\\u2019. 
It is worth emphasizing that although our prompts include in-context examples, those examples are from a fixed task that is unrelated to the target task. In our implementation, we used examples from traveling salesman, object selection, and logistics problems, which do not belong to any planning task we evaluate. Please refer to the detailed prompts listed in A.10. Those are the prompts that are fixed for all the different planning tasks.\"}", "{\"summary\": \"This paper proposes an LLM prompting-based framework for planning tasks. The main contribution lies in its use of prompt and pipeline templates, which can be used across various planning tasks. Here, planning problems from various domains were considered, and planning is treated as an optimization problem; in summary, LLMs are used as an optimizer. A formal solver is used to achieve the planning goal, since previous works have already shown that LLMs still lack the coherent reasoning needed for planning. The main contribution is an end-to-end framework that deploys a zero-shot learning approach for both single- and multi-stage planning tasks. Additionally, the authors claim that their framework can self-critique the generated planning code and revise it to achieve the goal. The effectiveness of the framework components is supported via the ablations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. LLMFP introduces a new perspective on using LLMs for formal optimization-based planning, a method that significantly expands the generalizability of planning tasks.\\n2. Experimental results are solid, with clear evidence of performance gains across diverse tasks and models. The ablation studies reinforce the utility of the framework components, which I really liked.\\n3. 
The ability to solve multi-step, multi-constraint problems without task-specific examples or extensive prior efforts is a major step forward in the area of LLM-based planning.\", \"weaknesses\": \"1. Complexity in FORMULATOR: Some parts, particularly the JSON representation and the code generation steps, could be simplified. While important, the handling of different variable types and constraints might be a bit dense for readers unfamiliar with optimization theory.\\n2. Regarding the Multi-Step Planning Problem: The predicate, object, and update structure are not clear in multi-step planning. Also, the image shown for this is not utilized in conveying the idea. Overall, the Figure 2 examples are not clear and make things confusing. \\n3. The author claims that their framework is a \\\"general approach, which does not require task-specific examples or task-specific efforts\\\"; however, in the paper, this statement is not supported in terms of explanations and prompt structure.\\n4. Some theoretical insights regarding performance would make this work stronger; right now it is presented more as experimental results.\", \"questions\": \"1. How does LLMFP handle generalization across different planning tasks? Please correct me, but it seems that we need a very elaborate prompt with a high level of detail for each task.\\n2. 
In section 3.4 (code generator), readers can benefit from prior work such as \\\"CAPE: Corrective Actions from Precondition Errors using Large Language Models\\\" and \\\"CoT-TL: Temporal Knowledge Representation of Natural Language Planning Task for Autonomous Agents using Chain-Of-Thought.\\\" Or is LLMFP doing something different compared to the above works? If yes, explain; if not, make sure to provide proper background.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Authors Round 2\", \"comment\": \"Thank you to the authors for providing detailed answers and comparisons regarding LLM+P. I concur with the authors that LLM+P requires more domain-specific prompt engineering compared to this work. However, the current work introduces additional overhead compared to LLM+P. For instance, on the Gripper dataset, LLM+P achieves quite good results in terms of success rate with GPT-4 instead of GPT-4o. I appreciate the robustness of the prompting system with respect to paraphrasing. This work will be a valuable addition to the robotics community. I will maintain my current rating.\"}", "{\"title\": \"Response to Reviewer FxuD 3/3\", \"comment\": \"**FxuD-Q5: How does LLMFP handle generalization across different planning tasks? It seems that we need a very elaborate prompt with a high level of detail for each task.**\", \"fxud_a5\": \"As we have clarified in FxuD-Q3, the only task-specific information needed is the user description of the task, the background information, and the user questions. The user description of the task indeed needs to be elaborate and accurate, **because this is the only source of information about the planning task. 
Otherwise, the planning problem will be ill-defined.** Our method only requires the minimum necessary task-specific information, compared with existing works that require additional human efforts such as providing task-specific examples, human-designed rules or critics, or even action-wise human feedback.\\n\\nOther than that, all the prompts to the LLMs, despite being elaborate and detailed, are **task-agnostic**. The reason why LLMFP can generalize across tasks is that it casts the planning problem into a constrained optimization problem, the processing of which is generic and task-independent.\\n\\n**FxuD-Q6: Prior works for section 3.4**\", \"fxud_a6\": \"We thank the reviewer for referring to two relevant papers in the field. CAPE provided self-corrections to generated plans, which is a good resource to include in section 2.1 (LLMs for Planning). CoT-TL focuses on translating natural language specifications into LTL representations and very briefly mentioned they could generate codes to solve translated representations with Gurobi solver, which is a good resource to include in section 2.2 (LLM+Solver). To provide more background, we included these papers in section 2 of the revised version.\\n\\n\\nHowever, we believe LLMFP is doing differently compared to the above works for code generation itself. Our understanding is that CAPE focuses on self-corrections with error feedback, which does not include discussions of code generation. Although CoT-TL briefly mentioned that they leveraged LLM to encode the LTL formula as mixed-integer linear constraints for a demo in section IV.D, their focus is the translation process and did not provide detailed discussions or examples describing how they accomplish code generation. 
Some other works mentioned in Section 2.2 also have the representation -> solver code part, but the code generation process of LLMFP is novel and different to these works for a major reasons: the code generation in LLMFP is completely zero-shot, with no in-context examples or task-specific examples, and completely based on the JSON representation created in Formulator. This JSON representation acts like a plan for code generation, as it includes every variable or step needed to encode this problem into codes and specifies what are the values, sources, specific information, etc., clearly. \\n\\n\\n\\n[1] Bachmann, Gregor, and Vaishnavh Nagarajan. \\\"The pitfalls of next-token prediction.\\\" arXiv preprint arXiv:2403.06963 (2024).\\n\\n[2] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\" arXiv preprint arXiv:2303.12712 (2023).\\n\\n[3] LeCun, Yann. \\\"Do large language models need sensory grounding for meaning and understanding.\\\" Workshop on Philosophy of Deep Learning, NYU Center for Mind, Brain, and Consciousness and the Columbia Center for Science and Society. 2023.\\n\\n[4] Momennejad, Ida, et al. \\\"Evaluating cognitive maps and planning in large language models with CogEval.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[5] Valmeekam, Karthik, Matthew Marquez, and Subbarao Kambhampati. \\\"Can large language models really improve by self-critiquing their own plans?.\\\" arXiv preprint arXiv:2310.08118 (2023).\\n\\n[6] Valmeekam, Karthik, et al. \\\"Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[7] Valmeekam, Karthik, et al. 
\\\"On the planning abilities of large language models-a critical investigation.\\\" Advances in Neural Information Processing Systems 36 (2023): 75993-76005.\"}", "{\"title\": \"Response to Reviewer FxuD - Round 3\", \"comment\": [\"Thank the reviewers for the reply! However, we respectfully disagree with the claim *\\u201cHowever, the current work introduces additional overhead compared to LLM+P. For instance, on the Gripper dataset, LLM+P achieves quite good results in terms of success rate with GPT-4 instead of GPT-4o\\u201d*. We would like to explain why this is inaccurate and should not be considered as a source of criticism of our work.\", \"**LLMFP does not necessarily need stronger models.** LLMFP can also perform well on GPT-4. To show this, we perform an experiment where we tested LLMFP on 10 queries of Gripper and 10 queries with Blocksworld with GPT-4. LLMFP could deliver optimal plans for 7 queries for Gripper and 8 queries for Blocksworld (yielding success rates of 70% and 80% respectively). We are happy to extend this experiment to a larger scale, if you believe this could improve our work.\", \"**LLM+P is a domain-specific solver, needs domain-specific examples, and can only solve PDDL problems.** In comparison, LLMFP is a general planner and does not need domain-specific examples. It is not equitable to directly compare the performance of LLM+P and LLMFP on one specific task domain.\", \"**LLM+P cannot solve non-PDDL problems**, such as the multi-constraint problems in our paper (details provided in response qLBs-A1). 
LLMFP can solve a broad spectrum of planning problems, including PDDL problems without domain-specific examples.\", \"LLM+P requires domain-specific effort **for every task**, but LLMFP is a generic approach.\", \"**LLMs only act as a translator in LLM+P, but LLMFP enables LLMs to understand the problem and build an optimization problem by themselves.**\", \"**GPT-4 is more expensive and slower than GPT-4o [1].** For LLM-based planning frameworks, runtime and cost are important factors. We do not think our contribution should be underestimated by using a cheaper and faster model.\", \"We hope the answers can address the reviewer\\u2019s concern. We would love to have further discussions with the reviewer!\", \"[1] https://openai.com/api/pricing/\"]}", "{\"comment\": \"### Q: What exactly does \\\"Average wall time (s) per query\\\" mean?\", \"a\": \"As we use iterative deepening for multi-step problems, the solver starts from a small timestep to check satisfiability. If the solver finds a satisfiable solution for a given timestep, it is guaranteed to be the optimal solution; if the solver finds no solution for a given timestep, it adds one more step and repeats the process.\\n\\nIn the Sokoban problem, since the plan length for some instances is long, especially with 2 boxes or on a 6x6 map, the average runtime per instance is **1358.3 seconds** (we set the timestep to start from 5). However, for a length-23 plan, when timestep=23, the runtime to solve for the solution is only **240.7** seconds. \\n\\nAs we discussed in [Appendix A.1 LLMFP Performance on Sokoban], \\u201calthough LLMFP is demonstrated to be capable of correctly encoding and solving the Sokoban problem, it is true that there are many more variables in the Sokoban problem than in other tasks because the problem is represented with a map with a large number of different positions. This slows down the speed of the SMT solver. 
To mitigate this problem, some potential solutions include 1) introducing methods to estimate the lower and upper bounds of step numbers needed and start from there, 2) developing heuristics to prioritize some possible options first, and 3) developing methods that put attention on a part of the map and ignore the unnecessary positions in the map. We would love to extend our work to explore these directions to make our framework more efficient.\\u201d\\n\\nWe believe that the major contribution of our paper is to **allow the LLMs to be able to correctly encode the problem**. For multi-step problems like this, correctly encoding the initial conditions, goals, and actions with codes is the most difficult part. For larger maps, the difficulty of encoding using LLMFP is only slightly higher even if there are more object descriptions in the prompt. Although the solving runtime increases exponentially for larger maps, this is an inherent problem for the SMT solver. This can be potentially solved by supporting different solvers in LLMFP, which we will explore as future work.\"}", "{\"comment\": \"If a score of 7 were available, I would have selected it. However, given the current scale, I must choose between 6 and 8. The rebuttal addressed my primary concerns, and I view this paper as a good contribution. That said, LLMs are not my area of expertise, and my familiarity with the latest advancements in this domain is limited to fully assess the paper's novelty and impact. Therefore, while I support accepting this paper, I'm not confident enough to raise my rating to 8.\"}", "{\"title\": \"Response to Reviewer qLBs 3/4\", \"comment\": \"**qLBs-Q4: LLMs have been used to translate natural language to planning problems. Similarly, the mapping from planning to SMT is well known in the planning literature. So, is the novelty limited to combining the two ideas together?**\", \"qlbs_a4\": [\"We argue that LLMFP is more than translating natural language to planning problems. 
Specifically,\", \"**Simply translating from natural language to planning problems and using SMT solvers does not work.** We added a new baseline approach Code_SMT, where we directly ask LLMs to translate and encode the natural language task description as a planning problem in SMT format and use SMT solvers to solve the problem. Code_SMT achieves an average of 2.7% and 62.4% for multi-constraint tasks, and 1.0% and 0.0% for multi-step tasks across two LLMs GPT-4o and Claude 3.5 Sonnet.\", \"**Existing works using LLMs as translators need task-specific examples.** As we mentioned in the paper, [1,4,6] leverages LLMs as translators to convert problems into fixed formats(like PDDL or JSON) and input them to external planners. They accomplish this by giving LLMs example input-output pairs under the same contexts and leveraging LLMs as pure translators.\", \"**LLMFP is a framework that enables LLM agents to understand and formalize planning problems from natural language descriptions.** When given a planning problem in natural language description, a planning researcher or engineer will formalize the natural language description as a formal optimization problem, and then use appropriate solvers to solve it. LLMFP framework enables LLM agents to act as expert planning engineers. **Without** any task-specific examples, LLMFP starts from understanding and analyzing the problem to generate valid goals, decision variables, and (explicit & implicit) constraints. Then, it summarizes all needed variables and necessary related information, again with no task-specific reference. 
This process is **non-trivial** and is the key to enabling LLMs to solve problems across **various inherently different tasks** in a **zero-shot setting**.\"], \"table_1\": \"Optimal rate (%) comparison of LLMFP with baselines on 5 multi-constraint problems.\\n| Method | Coffee | Workforce | Facility | Task Allocation | Warehouse | Average |\\n|---------------------------------|--------|-----------|----------|-----------------|-----------|---------|\\n| Direct_GPT-4o | 0.8 | 2.6 | 0.0 | 0.0 | 0.0 | 0.7 |\\n| Direct_o1-preview | 25.9 | 47.6 | 4.8 | 4.0 | 66.0 | 29.7 |\\n| CoT_GPT-4o | 0.0 | 5.6 | 0.0 | 0.0 | 16.0 | 4.3 |\\n| Code_GPT-4o | 17.7 | 75.8 | 53.9 | 0.0 | 8.0 | 31.1 |\\n| Code-SMT_GPT-4o | 0.0 | 10.8 | 0.6 | 0.0 | 2.0 | 2.7 |\\n| LLMFP_GPT-4o | **64.7** | **92.2** | **70.7** | **96.0** | **72.0** | **79.1** |\\n||\\n| Direct_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| CoT_Claude 3.5 Sonnet | 7.1 | 0.0 | 0.0 | 0.0 | 14.0 | 4.2 |\\n| Code_Claude 3.5 Sonnet | 59.8 | 71.9 | 47.3 | 0.0 | 42.0 | 44.2 |\\n| Code-SMT_Claude 3.5 Sonnet | 75.6 | 36.8 | **49.7** | 86.0 | 64.0 | 62.4 |\\n| LLMFP_Claude 3.5 Sonnet | **80.5** | **88.7** | 48.2 | **96.0** | **90.0** | **80.7** |\", \"table_2\": \"Optimal rate (%) comparison of LLMFP with baselines on 4 multi-step problems.\\n| Method | Blocksworld | Mystery Blocksworld | Movie | Gripper | Average |\\n|---------------------------------|-------------|----------------------|--------|---------|---------|\\n| Direct_GPT-4o | 41.5 | 0.8 | 85.7 | 0.0 | 32.0 |\\n| Direct_o1-preview | 88.4 | 31.9 | **100.0** | 52.0 | 68.1 |\\n| CoT_GPT-4o | 39.9 | 2.7 | 81.0 | 0.0 | 30.9 |\\n| Code_GPT-4o | 0.0 | 0.3 | 0.0 | 0.0 | 0.1 |\\n| Code-SMT_GPT-4o | 0.0 | 0.0 | 0.0 | 4.0 | 1.0 |\\n| LLMFP_GPT-4o | **96.2** | **77.7** | **100.0** | **76.0** | **87.5** |\\n||\\n| Direct_Claude 3.5 Sonnet | 43.2 | 0.5 | **100.0** | 12.0 | 38.9 |\\n| CoT_Claude 3.5 Sonnet | 52.8 | 2.8 | **100.0** | 28.0 | 45.9 |\\n| Code_Claude 3.5 Sonnet | 
0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Code-SMT_Claude 3.5 Sonnet | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| LLMFP_Claude 3.5 Sonnet | **93.0** | **98.0** | **100.0** | **76.0** | **91.8** |\"}" ] }
0K0hoNL9sx
Quantifying the similarity of information contained in probabilistic latent spaces
[ "Kieran A. Murphy", "Sam Dillavou", "Danielle Bassett" ]
In contrast to point-based representation spaces, probabilistic representation spaces have a well-defined sense in which they compress information about a dataset. When viewing representation spaces as communication channels, it becomes natural to ask about the similarity of information content of different representation spaces. Starting with classic measures of similarity of hard clustering assignments, we propose a natural modification that generalizes to probabilistic representation spaces. We also propose a practical route toward estimating the similarity measure based on fingerprinting a representation space with a sample of the dataset that is applicable when the transmitted information is only a handful of bits. Equipped with the similarity measures, we build upon model centrality as a signature of unsupervised disentanglement by assessing ``channel centrality'' and finding information fragments that are repeatedly learned in VAE and InfoGAN ensembles. Additionally, we evaluate the diversity of information content of the full latent space over the course of training for ensembles of models, and find a striking difference in homogeneity of information depending on the dataset. Finally, we leverage the differentiability of the proposed method and perform ensemble learning with VAEs by boosting the information content of a set of weak learners incapable of representing the global structure of a dataset.
[ "Information theory", "representation learning", "disentanglement" ]
https://openreview.net/pdf?id=0K0hoNL9sx
https://openreview.net/forum?id=0K0hoNL9sx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "OGiSmtiTk1" ], "note_type": [ "comment" ], "note_created": [ 1728406228140 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1896/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We found a bug in the calculation of the mutual information between full latent spaces (Fig 4), and the relevant results change significantly enough that we feel resubmission is necessary.\"}" ] }
0JwxMqKGxa
Reinforcement Learning on Synthetic Navigation Data allows Safe Navigation in Blind Digital Twins
[ "Ilias Sarbout", "Mehdi OUNISSI", "Dan Milea", "Daniel Racoceanu" ]
Limited access to dedicated navigation data in visually impaired individuals is a significant bottleneck for developing AI-driven assistive devices. For this purpose, we have developped a virtual environment designed to extract various human-like navigation data from procedurally generated labyrinths. Using reinforcement learning and semantic segmentation, we trained a convolutional neural network to perform obstacle avoidance from synthetic data. Our model outperformed state-of-the-art backbones including DINOv2-B in safe pathway identification in real world. In conclusion, despite being trained only on synthetic data, our model successfully extracted features compatible with safe navigation in real-world settings, opening new avenues for visually impaired.
[ "Electronic Travel Aids", "Virtual Environment", "Semantic segmentation", "Reinforcement Learning" ]
https://openreview.net/pdf?id=0JwxMqKGxa
https://openreview.net/forum?id=0JwxMqKGxa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wOnHzaqAc3", "hUfrgU7w7V", "fwTBRaGFbM", "cXfTutUvky", "ZUxEsNqcVO", "JB3LJca0L8", "6MRJjNDIio" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730564046095, 1730599217060, 1730677104361, 1730681296079, 1736937719216, 1730417533937, 1731027103274 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_erv3" ], [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_C83X" ], [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_G6da" ], [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_R7jj" ], [ "ICLR.cc/2025/Conference/Submission10204/Authors" ], [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_aN9A" ], [ "ICLR.cc/2025/Conference/Submission10204/Reviewer_HodL" ] ], "structured_content_str": [ "{\"summary\": [\"The authors developed a virtual environment designed to extract various human-like navigation data from procedurally generated labyrinths.\", \"Using reinforcement learning and semantic segmentation, authors trained a convolutional neural network to perform obstacle avoidance from input RGB data. They demonstrated that their model outperformed state-of-the-art backbones including DINOv2-B in safe pathway identification in the real world.\"], \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors talk in their article about the need to solve an extremely important and sensitive problem of creating intelligent navigation systems for visually impaired individuals.\\n\\n2. The authors developed and tested NavIndoor, an open-source software for the computationally efficient generation of procedurally generated, obstacle-filled environments, enabling seamless integration with AI systems.\", \"weaknesses\": \"1. 
The overview of methods in Figure 1 requires some improvement: in the subfigure named \\\"Signal processing method\\\", the authors mention Machine Learning and Neural Networks separately, but Neural network training is also machine learning. The category \\\"Of which trained using blind specific data\\\" also looks strange.\\n\\n2. The Related work section does not contain a single paper from 2024, which is strange. It is necessary to explicitly indicate that there are no such works, or add them to the overview.\\n\\n3. The Q-network architecture proposed by the authors is very simple and it is unclear how it differs from existing models used in modern works on reinforcement learning. The authors should add an explicit mention of the differences in the caption to Figure 4. Please, compare specific aspects of author's architecture to existing models, or to highlight any novel elements that may not be immediately apparent from the figure.\\n\\n4. On page 8, Active Vision Dataset (AVD) is mentioned, but no reference to the source is provided. The authors need to explain what this dataset is.\\n\\n5. Figure 8 does not have labels for the values on the vertical axis. They should be added.\\n\\n6. In the abstract and introduction, the authors say that their system and method are specifically developed for visually impaired individuals. However, the developed dataset, methodology, and experiments look as if they are solving a general navigation problem typical for intelligent agents (robots, etc.) using data from onboard sensors with a discrete action space. The authors need to explicitly clarify this in the article. Otherwise, the title of the article may mislead the reader.\\n\\n7. The usefulness of the developed solution is questionable due to the poor photorealism of the simulator used and the overly simplified formulation of the navigation problem. 
Photorealism is generally important for the quality of image-based navigation methods to effectively transfer to real-world environments.\\n\\n8. The English language of the article requires careful checking, for example, \\\"in\\\" does not look entirely correct in the phrase \\\"dedicated navigation data in visually impaired individuals\\\" in the abstract. The text contains unnecessary punctuation marks, for example, several dots in a row. Typo \\\"developed\\\" in the abstract.\", \"questions\": \"1. Why didn't the authors use photorealistic simulators Habitat Sim and AI2Thor, which can solve indoor navigation problems, to train and validate their approach?\\n\\n2. Could the authors explain their rationale for developing NavIndoor rather than using existing environments like Habitat or AI2Thor? Are there specific advantages of NavIndoor for this task that are not provided by these other environments?\\n\\n3. Have the authors considered comparing their approach to more recent reinforcement learning methods, such as those used in Staroverov, A., et al. \\\"Skill fusion in hybrid robotic framework for visual object goal navigation.\\\" Robotics 12.4 (2023): 104? What specific benchmarks or challenges do they think would be most relevant for evaluating their system's performance?\\n\\n4. 
Why didn't the authors include any form of anonymized open source code in the article or supplementary materials?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper describes a RL method using synthetic data to train a model that can assist blind people in navigating through real-world environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The basic idea/concept of using synthetic datasets to train a learning model for navigation is fine and has in fact been done in some of the latest work on model constructions in CV/ML and SLAM in learning-based robotics, as well as many other CV/ML applications for learning-based robotics tasks (e.g. navigation, collision avoidance, driving/steering, etc).\", \"weaknesses\": \"The motivation about suggesting/implying to provide some forms of assistive tools for blind people seems completely irrelevant to the work presented in the paper. This paper presents a method to train a model for indoor navigation using semantic segmentation. There is no clear explanation on how the method is actually to be used by blind people as assistive devices for blind people to use for navigation (sensory substitution devices, as the paper claims). I suggest the authors to clarify the connection between the proposed work and the stated application. Please clearly explain how the semantic segmentation and navigation model could be integrated into an actual sensory substitution device. 
Please also discuss specific requirements of assistive technologies for the visually impaired that could inform them how to use the proposed research.\\n\\nI did not find the results to be particularly illuminating or superior, considering that the performance is more or less in the same range as existing SLAM algorithms for relatively controlled and simple environments.\\n\\nExamples shown also do not indicate any generalization capability to me. \\n\\nThe description also does not offer any rationale for high robustness either.\", \"questions\": [\"Why did the authors not just use one of the many many indoor environment datasets that are already available (e.g., RoboThor. Matterport3D, Apple's Hypersim, Meta's Ego-Exo4D). Instead, their simulation framework has very very low visual realism which will affect the performance, when moving to real-world video data. Why not just train the model using one of these datasets that has high-quality realism? I'd suggest that the authors compare their approach using their custom environment to results using one or more of the existing datasets mentioned above. Additionally, I'd suggest the authors to discuss the specific advantages of their NavIndoor environment compared to existing datasets, particularly in relation to training navigation models for visually impaired users. User studies should be conducted in these comparisons.\", \"Why did the authors not conduct any user study with blind people, so I'd consider the results pretty useless/irrelevant to the target user groups claimed in the title? I'd suggest that some user studies/evaluations with visually impaired participants for navigating in real world with and w/o this approach to strengthen the paper's claims and relevance to the target application.\", \"There is not enough explanation about what a Sensory Substitution Device is, and this is the entire motivation of their paper. 
I suspect that most readers in the ICLR community would not know without explanation what a Sensory Substitution Device (SSD) is. I'd suggest that the authors add a dedicated subsection in the introduction or background to define SSDs and explain their relevance to the proposed research, and discuss how neural networks can be applied and used in this context clearly - possibly with some diagrams to illustrate the use case.\", \"While the basic idea of the described approach is not flawed (train a model to do environment navigation, use that model to help users navigate in the real world), the proposed approach don't seem to offer any novelty over a large body of learning-based model construction and SLAM literatures, as well as many similar works published in robot mapping literatures. I'd suggest the authors compare with some of the latest works on SLAM or robot mapping techniques [1], Additionally, authors can more clearly articulate the novel aspects of their method in the context of existing literature on learning-based navigation against other mapping and SLAM\", \"Can the authors perform an extensive comparison and user studies to justify against a large body of existing work on SLAM (see a recent survey in [1]) in the same context for guiding the visually impaired users? Using a semantic segmentation model to navigate through a (synthetic or real) indoor environment, and then evaluate such a model on pre-processed semantically-segmented images from a real-world indoor environment dataset? I'd suggest the authors to conduct a thorough user study on the target group (i.e. visually impaired), to support any meaningful claim on the key contribution of this paper.\", \"In terms of robustness and generalization, what's the insight on why this approach would do any better than existing methods? 
I'd suggest the authors to provide more empirical evidence and/or theoretical justification for why their synthetic data generation and training approach leads to better robustness or generalization compared to existing methods, such as those mentioned in [1].\", \"[1] Deep reinforcement learning based mobile robot navigation: A review. https://ieeexplore.ieee.org/abstract/document/9409758\"], \"flag_for_ethics_review\": \"['Yes, Other reasons (please specify below)']\", \"details_of_ethics_concerns\": [\"It's not okay to refer to blind people as \\\"blinds\\\" (line ~27)\", \"I find it a little dubious to use 'blind people' as some target user groups without ever getting any input or user evaluation\", \"from the target group.\"], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors try to overcome limited real-world datasets or environments that are expensive to train by using simulators, but at the same time using limited real-world datasets to incorporate selective learning or domain transfer in the optimization pipeline.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Strengths:\\n\\n1. The figures are easy and intuitive to understand.\\n2. The experiments performed are represented on visuals well.\", \"weaknesses\": \"Weakness:\\n1. The problem motivation is clear to me and the problem formulation is also clear to me. But I don\\u2019t understand the connection between two. I think both of them are independent problems, in the sense that the authors could\\u2019ve just directly posed it as a sim2real navigation problem, whereas visual navigation in the world is: Can\\u2019t it be done so? Please correct me if I\\u2019m wrong.\\n2. My research is in visual navigation, and from what I see, to put it briefly in words, this problem would have made a lot of sense 4-5 years ago when obtaining a real-world policy was expensive to learn in real-world environments. 
But with current sota visual navigation algorithms and realistic simulators, I think both algorithms trained in a wide variety of simulator data and models that have already incorporated lots of indicative knowledge and priors from large datasets (LLMs, VLMs, RTX, etc.) would generalize well to real-world tasks. I\\u2019d suggest the authors incorporate these models as the baselines instead of models that output some form of representations and then training policy on top of that.\\n3. From what I see, the authors need to spend a bit more time in the manuscript presentation, not that there are too many typos, but I think the formatting and sizes of different figures with the text is not coherent and consistent\\n4. The paper also has a lot of technical flaws in the experiment section; for example, the only strength I see is in table 2, but I fail to even understand what the full form of VCD is, let alone understand the technical aspects. I suggest the authors lay out their contributions well and elaborate on each of the specific contributions.\", \"questions\": \"I think there is quite some work that needs not only the experiments and organization of the paper but also reiterating in the problem formulation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method and a simulator for training sensory substitution devices aimed at facilitating safe navigation for visually impaired individuals. The authors employ reinforcement learning to handle obstacle avoidance tasks. Training is initially conducted in a simulated environment, where a semantic segmentation camera captures segmentation maps as input. For real-world application, an external segmentation model processes RGB images to generate similar segmentation maps, which are then fed into the navigation model. 
The authors design a simple, compact model that relies exclusively on segmentation maps, optimizing for computational efficiency. They evaluate the model\\u2019s performance both in simulation and on real-world datasets, comparing it with pre-trained state-of-the-art models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"small and compact model purely trained on segmentation map, good computation efficiency.\", \"use synthetic data and follow a sim2real approach\", \"potential impact on visually impaired individuals.\"], \"weaknesses\": [\"in abstract, line 015, typo \\\"developped\\\" should be \\\"developed\\\"\", \"It might not be necessary to build a stand alone simulation platform to achieve the task, the scene creation and data collection can definitely be accomplished using existing simulators. Many available simulators focus on photorealism but could still be effective for this task, as they have already demonstrated good results in sim and real. Therefore, I'm wondering if it worth the effort to develop a simulator just for this task.\", \"segmentation categories are quite limited, which may not sufficiently capture the rich semantics required in complex, real-world environments.\", \"It might not be a fair comparison to compare the model specifically for navigation task with general-purpose models trained for recognition.\"], \"questions\": [\"I\\u2019m curious about the decision to use a CNN and D3QN rather than exploring more advanced architectures. If on-device computation efficiency is a concern, have you considered using pruning and quantization to optimize more complex models?\", \"Have you tested the model on a portable device? The reported testing on an RTX 4090 at 179 FPS seems beyond the computational needs of the task and may not reflect real-world performance on a SSD, where hardware specifications vary significantly. 
Since efficiency was a key factor in selecting a simpler model structure, testing on a lower-power or portable device could provide more insight into practical deployment. Alternatively, a wireless solution might offer a way to handle intensive computation remotely, particularly in indoor settings where connectivity is reliable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The goal of this paper is to propose and evaluate a methodology for learning a vision model that can act as a sensory substitution device for the blind/visually impaired. The paper proposes a method for generating simulation data, which is then used to train a model. This model determines where it is safe to travel, specifically forward, left, right, or backward. The model takes as input a semantic segmentation of the scene and the action history, and outputs the Q-value of each of the 4 actions. The results show that the best model can achieve nearly 75% human performance in simulation, and on the real dataset the output of the value function is strongly correlated with distance to navigation boundary. This suggests that it can act as a good aide for a visually impaired person moving through an indoor space.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"*This paper has the most thorough literature review I have ever seen in a conference paper (109, I counted! As many pages of citations as paper). I in particular really appreciate Figure 2 and the comprehensive classification of prior work on sensory substitution devices. This is great at giving a background on SSDs to an audience which may be more familiar with ML.\\n\\n*The motivation behind this paper is excellent. 
Helping the visually impaired see and navigate through human environments is incredibly important, and this paper does an excellent job of establishing why we should care about this work.\\n\\n*There are many papers in AI/ML/robotics dedicated to sim2real transfer, but in this paper it works fairly well. The simulation is fairly simplistic (at least visually) and the real evaluation environment uses images taken in the real world, so the transfer is no easy feat. The use of semantic segmentation masks as input to the model likely makes a big difference, however even so it is still non-trivial. I am surprised at the transfer success.\\n\\n*In the Linear Probing section, there is a comparison to several state of the art baselines. Comparing to baselines is incredibly important, so the comparison here is a good thing.\", \"weaknesses\": \"*The biggest weakness with this paper is that the results are just not convincing. The numbers presented are 1) comparison to human-performance on the sim task (human having full visual sight), 2) distance to navigation boundary as compared to output of the value function, and 3) AUC comparison to the baselines (note that AUC is never defined in the paper, and the abbreviation is never clarified (I assume it stands for area under the curve?)). If the goal is to assist visually impaired people navigate, how do these numbers show that?\\nThey are very indirect. Why not try actual navigation tasks with the proposed model and baselines? To convince me this model is actually useful, I need to see more directly applicable results.\\n\\n*The paper claims the value function is an indicator of safety and guidance (Figure 4 caption). This is not well-justified. Why should the arbitrary \\u201cbest possible reward the agent can get from a given state\\u201d be the same as whether or not that state is safe?\\n\\n*There are several missing citations. SegFormer is never cited. Furthermore, the Active Vision Dataset is not cited. 
Where did it come from? Who collected it? What kind and how much data is there? How are you estimating distance to a navigation boundary?\\n\\n*The creation of the simulated data is not explained well enough. It seems only the Figure 3 caption gives any data on this, and that is not much. How is DFS used to place the obstacles and collectibles? \\n\\n*Also, why are there collectibles? That seems like a random addition to the simulated data. If your goal is to get the agent to explore, there are many ways in the RL literature to motivate exploration (e.g., maximum entropy).\", \"questions\": \"This is more advice than question, but the biggest thing this paper can do to increase its quality is to generate more convincing results. The easiest thing to try would be to put a camera on blindfolded real human participants and have them use the proposed models and baselines and see which allows them to avoid obstacles the best. Alternatively (if the IRB approvals for that are too hard to get), train an RL model that takes as input exactly the same sensory substitution that a visually impaired person would get from the model, and report how well it does at navigation tasks, both with the proposed model and the baselines.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Limited access to specialized navigation data for visually impaired individuals remains a major obstacle in advancing AI-driven assistive devices. To address this challenge, this work introduces a virtual environment specifically designed to generate human-like navigation data from procedurally generated labyrinths. Utilizing reinforcement learning and semantic segmentation, a convolutional neural network was trained to perform obstacle avoidance based on synthetic data. 
The resulting model surpassed state-of-the-art backbones, including DINOv2-B, in accurately identifying safe pathways in real-world settings. Overall, despite training exclusively on synthetic data, the model successfully extracted features conducive to safe navigation in real-world conditions, potentially paving the way for innovative solutions to assist the visually impaired.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) A significant gap in the literature is identified concerning the use of navigation data to enhance Sensory Substitution Systems.\\n(2) NavIndoor, an open-source software, is introduced for the computationally efficient generation of procedurally generated, obstacle-filled environments, enabling seamless integration with AI systems. NavIndoor supports the efficient creation of large-scale, human-like navigation datasets.\\n(3) The study demonstrates that synthetic data enables the extraction of low-dimensional features for navigation by individuals with visual impairments.\\n(4) It is demonstrated that applying basic morphological operators to synthetic semantic segmentation maps enhances performance in real-world conditions after training.\", \"weaknesses\": \"(1) The model's training primarily relies on synthetic navigation data from procedurally generated environments, which may not fully capture the complexity of real-world conditions. This could introduce limitations in the model\\u2019s generalization to unpredictable real-world scenarios.\\n(2) While the paper claims some level of real-world transferability, there is limited discussion on effective domain adaptation techniques or extensive testing in real environments. 
This lack of robust domain adaptation could mean the model's performance may vary significantly when exposed to real-world conditions without sufficient adaptation.\\n(3) The study lacks discussion on how well the model\\u2019s navigation cues (such as haptic or auditory feedback) are perceived and utilized by visually impaired users. Testing on real users could provide valuable insights into how user-friendly and effective the system is in practical assistive scenarios.\", \"questions\": \"A lack of specialized navigation data for visually impaired individuals hinders progress in AI-driven assistive devices. To address this, the work presents a virtual environment that generates human-like navigation data using procedurally generated labyrinths. A convolutional neural network, trained with reinforcement learning and semantic segmentation on synthetic data, enables effective obstacle avoidance. But I have some concerns illustrated in the weaknesses section. Looking forward to seeing the response from the author for these questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0JjsZC0w8x
COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement
[ "Yuxi Xie", "Anirudh Goyal", "Xiaobao Wu", "Xunjian Yin", "Xiao Xu", "Min-Yen Kan", "Liangming Pan", "William Yang Wang" ]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks. However, existing approaches typically implement iterative refinement at the application or prompting level, relying on autoregressive (AR) modeling. The sequential token generation in AR models can lead to high inference latency. To overcome these challenges, we propose **C**ontext-Wise **Or**der-**A**gnostic **L**anguage Modeling (COrAL), which incorporates iterative refinement directly into the LLM architecture while maintaining computational efficiency. Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally during the generation process. Leveraging the order-agnostic nature of COrAL, we introduce sliding blockwise order-agnostic decoding, which performs multi-token forward prediction and backward reconstruction within context windows. This allows the model to iteratively refine its outputs in parallel in the sliding block, effectively capturing diverse dependencies without the high inference cost of sequential generation. Empirical evaluations on reasoning tasks demonstrate that COrAL improves performance and inference speed, respectively, achieving absolute accuracy gains of $4.6$\% on GSM8K and $4.0$\% on LogiQA, along with inference speedups of up to $3.9\times$ over next-token baselines. Preliminary results on code generation indicate a drop in pass rates due to inconsistencies in order-agnostic outputs, highlighting the inherent quality--speed trade-off.
[ "autoregressive large language modeling", "decoding", "iterative refinement" ]
Reject
https://openreview.net/pdf?id=0JjsZC0w8x
https://openreview.net/forum?id=0JjsZC0w8x
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yb2C5SgGWH", "tFR1voAjLp", "rdm9XiafnD", "mkCx5v2GyZ", "mfRumVQL7V", "knUHJM12wy", "kA3zV7zzrp", "hhwZypEfQm", "hJjYbcLqPe", "XxJi9HyeNz", "Xi7uHjuJvI", "UxtzrTjBBW", "Lmind915Yu", "KEX2zARrZn", "IbF8CwYiHm", "IYhwgTDZ4Q", "H8VU0fJNkX", "GJwLuIKMNJ", "F1fnpNhWZH", "DTZ3KHdugh", "AaLRjCG3nz", "AVMYpiovfS", "7wfl9OIDes", "4tcW6Dm6dV", "1ynP7XIvQ4", "09D5jdgovq" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732159528733, 1732158883406, 1732158705232, 1732288682449, 1732850333837, 1732372613310, 1734695363269, 1732605335181, 1730694478287, 1732599414024, 1730517940150, 1732158089306, 1732157728955, 1732850348744, 1732157792507, 1732498311703, 1737524152277, 1732498249625, 1732498717823, 1732605065691, 1732602461807, 1732498978233, 1730662675128, 1732358030448, 1732500377739, 1730692218214 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_LRiM" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_oenX" ], [ "ICLR.cc/2025/Conference/Submission11890/Area_Chair_gvtB" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_U9ey" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_U9ey" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_LRiM" ], [ 
"ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_DAsr" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_oenX" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_U9ey" ], [ "ICLR.cc/2025/Conference/Submission11890/Authors" ], [ "ICLR.cc/2025/Conference/Submission11890/Reviewer_DAsr" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer LRiM\", \"comment\": \"We appreciate your constructive feedback!\\n\\n> The improvement of the CORAL is not generalizable enough\\n\\nWe appreciate the reviewer\\u2019s concern about the generalizability of COrAL. \\n\\nWe would like to highlight that COrAL achieves significant improvements in accuracy and inference speed on various reasoning tasks, spanning from arithmetic reasoning, including GSM8K ($+4.6$%) and MATH ($+2.5$%), to logical reasoning containing LogiQA ($+4.0$%) and ReClor ($+1.5$%). Our comprehensive experiments demonstrated the potential of order-agnostic language modeling to enhance reasoning through internal iterative refinement. 
On the other hand, our extended experiment on code generation also shows the limitations of COrAL in tasks that require strict syntactic coherence, providing a deeper understanding of the pros and cons of our proposed method.\\n\\nDue to the constraints in computation and data resources, we leave it to future work to explore a broader range of tasks (e.g., instruction following, writing, dialogue, infilling) to probe both the generalizability and specialty of COrAL. We have included a detailed discussion on this in the Limitations of our paper.\\n\\n> it needs to use more GPU memory (and \\\"waste\\\" some computation because of verification and multi-forward) to achieve this. So it is not friendly to equipment that most people use.\\n\\nThanks for raising your concerns about the GPU memory and computational overhead in COrAL. \\n\\nAs discussed in Section 4.3, this overhead scales efficiently relative to the number of predicted positions, with target-aware RoPE applied only to the last layer. Take results on GSM8K for example: \\n\\n| Approach | TFLOPS (per forward pass) | Accuracy (%) | Speed (tokens per second) | Speedup |\\n| :- | :-: | :-: | :-: | :-: |\\n| NT | $2.81$ | $74.1$ | $39.7$ | $1.0\\\\times$ |\\n| Ours | $13.6$ | $75.3$ | $43.4$ | $1.1\\\\times$ |\\n| Ours $_\\\\textrm{w/o verifier}$ | $5.48$ | $72.4$ | $156.8$ | $3.9\\\\times$ |\\n| Ours $_\\\\textrm{w/o multi-forward}$ | $17.9$ | $78.7$ | $14.9$ | $-$ |\\n\\nWith forward and backward context window sizes of $k = 4$, COrAL (w/o verifier) costs $5.48$ TFLOPS per forward pass compared to $2.81$ TFLOPS for next-token prediction. In other words, COrAL predicts $8\\\\times$ number of tokens with less than $2\\\\times$ overhead in computational cost. This indicates the efficiency of COrAL in leveraging available computation resources to accelerate and enhance inference. \\n\\nMoreover, users can adjust the decoding hyperparameters (e.g., context window and block sizes) to suit their device capabilities. 
For reference, our experiments used a forward and backward context window of $k=4$, a block size of $b=64$, and a maximum sequence length of $512$ on a 40 GB A100 GPU for reasoning tasks.\\n\\n> In eq.8 entropy is always positive, so -H(x) is always negative and exp(-H(x)) is always less than 1. So min(a,a*exp(-H(x))) is always a*exp(-H(x)).\\n\\nThanks for pointing out the typo. The correct formulation should be $\\\\min(\\\\epsilon, \\\\sqrt{\\\\epsilon}\\\\exp(-H(x)))$ instead. We have corrected this typo in the updated manuscript.\"}", "{\"title\": \"Response to Reviewer oenX (2/2)\", \"comment\": \"> It would be really interesting to check how much performance is lost by starting from a pretrained model as compared to full training a method employing coral from scratch. Do you think that some performance is left on the table because you start from a pretrained model?\\n\\nThanks for your insightful suggestions on employing COrAL from scratch through pretraining to finetuning! We agree that pretraining COrAL from scratch could unlock further potential. Our findings on the order-agnostic training tax suggest that **discrepancies between pretraining and fine-tuning objectives can degrade performance**. While our work can be viewed as an initial step to explore the potential to employ order-agnostic modeling for generative language models, we leave it to future work to explore this promising direction of training COrAL from scratch due to computational and resource constraints.\\n\\nExisting work (e.g., Gloeckle et al., 2024) highlights that **pretraining with multi-token prediction yields more accurate models than fine-tuning alone**. They show that pretraining with multi-token prediction allows the additional heads to be much more accurate than a simple finetuning of a next-token prediction model, thus allowing the models to unlock self-speculative decoding\\u2019s full potential. Likewise, Ye et al.
(2024) demonstrate the importance of starting from the pretraining stage to learn the skill of error correction, which cannot be acquired by simply applying LoRA finetuning. Following this direction, future work could explore training COrAL from scratch to fully harness its order-agnostic capabilities.\\n\\n> Figure (2) is a little bit unclear to me. Why are there seemingly different offsets for the refinements and why is there not much visual seperation inbetween forward prediction and refinement?\\n\\nThanks for the clarifying question. The offsets are determined by the position of the last fixed token in the sliding decoding block (or the starting position of the current sliding block). Based on both forward and backward dependencies in the generated context, Figure 2 shows that this internal refinement process amends the duplicate \\\"marine\\\" to \\\"organism\\\". Here, the forward prediction and refinement may be close to each other within such a small context window (i.e., $k=3, b=6$). We refer to the case study in arithmetic reasoning in Figure 8 to illustrate how the backward refinement contributes to correcting the wrong tokens generated in forward prediction. \\n\\n\\n> Maybe it would be a good idea to incorporate an application in which this method shines. E.g., by looking into domains that can benefit from the order-agnostic aspect such as protein language modelling.\\n\\nWe appreciate your insight to explore applications that could benefit from the order-agnostic aspect of COrAL! While our current focus is on reasoning tasks to validate COrAL\\u2019s ability to capture multiple dependencies efficiently, exploring domains like protein modeling represents an exciting future direction. 
We hope to investigate COrAL\\u2019s specialization and generalizability across broader tasks as part of future work.\\n\\n---\\nSean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi: Generating Sequences by Learning to Self-Correct. ICLR 2023\\n\\nAman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark: Self-Refine: Iterative Refinement with Self-Feedback. NeurIPS 2023\\n\\nNoah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao: Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023\\n\\nKarl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman: Training Verifiers to Solve Math Word Problems. CoRR abs/2110.14168 (2021)\\n\\nHunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe: Let's Verify Step by Step. ICLR 2024\\n\\nShengnan An, Zexiong Ma, Siqi Cai, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, Weizhu Chen: Can LLMs Learn From Mistakes? An Empirical Study on Reasoning Tasks. EMNLP (Findings) 2024: 833-854\\n\\nTianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao: Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. ICML 2024\\n\\nFabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozi\\u00e8re, David Lopez-Paz, Gabriel Synnaeve: Better & Faster Large Language Models via Multi-token Prediction. ICML 2024\\n\\nTian Ye, Zicheng Xu,Yuanzhi Li, Zeyuan Allen-Zhu. 
Physics of language models: Part 2.2, how to learn from mistakes on grade-school math problems, 2024.\"}", "{\"title\": \"Response to Reviewer oenX (1/2)\", \"comment\": \"Thanks for your insightful and thorough comments!\\n\\n> Are the AR baselines also fine-tuned on the tasks?\\n\\nYes, the AR baselines are based on the same order-agnostic model, fine-tuned on tasks with both forward prediction and backward refinement objectives (Section 2). To ensure fair comparison, we adopted a two-stage training protocol for AR-LLMs to endow them with order-agnostic abilities without pretraining. Specifically:\\n\\n**First Stage**. Fine-tune the last layer with target-aware RoPE, adapting next-token prediction to multi-position prediction.\\n\\n**Second Stage**. Freeze and gradually unlock the last layer during full fine-tuning to stabilize the autoregressive loss.\\n\\nWhile effective for stabilizing forward prediction, this method incurs an order-agnostic training tax, with next-token prediction performance dropping from $77.0$% (baseline) to $76.5$% and $74.1$% after the first and second stages, respectively. This likely arises from differences in training objectives and corrupted data incorporation during fine-tuning. Exploring pretraining with order-agnostic modeling could mitigate this issue. We leave this to future work due to the computational constraint.\\n\\nPlease find a more detailed discussion regarding the challenges in training and optimization in the Limitations section of our updated manuscript.\\n\\n> It would be interesting to compare this ablation with a method from the related work that has a similar computational cost.\\n\\nThanks for your thoughtful suggestion! To ensure a fair comparison, we included self-consistency (SC) using $4$ outputs sampled by the next-token prediction baseline. 
Note that for other related works on iterative refinement, we instead compare the performance gains across different approaches, considering the discrepancies in base models and training data. Below are the results on GSM8K.\\n\\n| Approach | Base Model | Accuracy (%) | $\\\\Delta$ (Accu) | Speed (tokens per second) | Cost (seconds per sample) | \\n| :- | :-: | :-: | :-: | :-: | :-: |\\n| Base (Welleck et al. 2023) | GPT-3 Instruct | $36.8$ | $-$ | $-$ | $-$ |\\n| Self-Correct (Welleck et al. 2023) | GPT-3 Instruct | $45.9$ | $+9.1$ | $-$ | $-$ |\\n| Self-Refine (Madaan et al. 2023) | GPT-3 Instruct | $55.7$ | $+18.9$ | $-$ | $-$ |\\n||\\n| SFT (An et al. 2024) | Llama2-7B | $55.0$ | $-$ | $-$ | $-$ |\\n| + Learning from mistakes (An et al. 2024) | Llama2-7B | $57.1$ | $+2.1$ | $-$ | $-$ |\\n||\\n| NT | COrAL (Mistral-7B) | $74.1$ | $-$ | $39.7$ | $3.67$ |\\n| SC@$4$ | COrAL (Mistral-7B) | $76.2_{\\u00b10.4}$ | $+2.1$ | $37.8$ | $15.50$ |\\n| Ours $_\\\\textrm{w/o multi-forward}$ | COrAL (Mistral-7B) | $78.7$ | $+4.6$ | $14.8$ | $9.81$ |\\n\\nThe results show that our approach (w/o multi-forward) consistently outperforms both the next-token and SC baselines, achieving higher accuracy while consuming less time per sample. Furthermore, compared with other mistake-correction fine-tuning approaches using the base model of the same size, our method achieves a large performance gain (e.g., $+4.6$% compared to $+2.1$% on Llama2-7B).\\n\\n> comparison to refinement methods from the related work is missing\\n\\nOur approach introduces a novel framework by converting the output-level sequential refinement into an internal order-agnostic decoding process. Below is a comparison with existing methods:\\n\\n**Prompt Engineering**. Works such as Self-Refine (Madaan et al. 2023) and Reflexion (Shinn et al., 2024) exploit incorrect attempts in historical data to improve the performance of a frozen LLM.
In contrast, our method enables the model to directly correct generated mistakes via backward refinement.\\n\\n**Verifier Training**. Such approaches (Cobbe et al., 2021; Lightman et al., 2023) train separate models to re-rank outputs. These strategies are orthogonal to our method, which could further enhance COrAL by providing stronger verification mechanisms.\\n\\n**Mistake-Correction Fine-Tuning**. An et al. (2024) demonstrate that the mistake reasoning data can be directly utilized through a standard fine-tuning approach. However, this approach relies on AR-LLMs and sequential prediction, whereas COrAL introduces a fundamentally new paradigm by enabling mistake correction through backward dependencies.\\n\\nWe have added a detailed discussion of these distinctions in Appendix B.\\n\\n> pseudo-code for Algorithm 1 is provided without walking through the pseudo-code\\n\\nWe appreciate your request for elaboration on Algorithm 1. We have included a brief walkthrough of Algorithm 1 in Section 3 (lines 247\\u2013253) to clarify its implementation.\\n\\n> a somewhat non-standard notation for expected values is used. their subscripts seem to be used much like in summations, but usually subscripts at an expected value are used to indicate over which distribution the expectation is taken: e.g., equation (1) and equation (3)\\n\\nThank you for pointing this out. We have revised the notation in equations (1) and (3) to align with standard conventions.\"}", "{\"comment\": \"Thanks for reviewer's reply. I will raise my confidence score.\"}", "{\"comment\": \"Dear Reviewer LRiM,\\n\\nThanks again for your valuable feedback and recognition of our contributions. \\n\\nWith the extended discussion period, we would like to engage in further discussions and address any remaining questions or concerns!\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for the thorough response!
I increased my score.\"}", "{\"metareview\": [\"**Summary:**\", \"The paper introduces COrAL, a framework designed to integrate iterative refinement directly into LLMs while maintaining computational efficiency. COrAL addresses limitations of autoregressive models, such as high inference latency and sequential dependency, by modeling token dependencies within context windows in an order-agnostic manner. It combines forward multi-token prediction and backward reconstruction to enable efficient sliding block-wise decoding, achieving improvements in both accuracy and inference speed.\", \"**Strength:**\", \"COrAL\\u2019s order-agnostic framework enables simultaneous forward and backward processing.\", \"The method enhances dependency modeling with minimal computational overhead.\", \"The authors conduct extensive ablation studies, showcasing the robustness of their approach in balancing speed and performance.\", \"**Weakness:**\", \"The paper lacks discussion and citations for related works.\", \"While the empirical results are robust, the paper could benefit from deeper theoretical discussions to provide insights beyond experimental observations.\", \"While COrAL excels in math and logic tasks, it struggles with code generation, limiting its general applicability.\"], \"additional_comments_on_reviewer_discussion\": \"After carefully reviewing all discussion threads, I conclude that this paper demonstrates sufficient merit. Three out of four reviewers provided positive feedback, and one reviewer expressed a negative opinion. However, this paper slightly falls short of the acceptance bar for the ICLR conference, especially when compared to the higher ratings received by other submissions.\"}", "{\"comment\": \"Thank you for reviewing our paper and considering our rebuttal. 
We sincerely appreciate your valuable feedback and recognition of our contribution.\"}", "{\"summary\": \"This paper proposes Context-Wise Order-Agnostic Language Modeling (COrAL), which incorporates iterative refinement directly into the LLM architecture while maintaining computational efficiency. Empirical evaluations on reasoning tasks demonstrate that COrAL improves performance and inference speed, and results on code generation indicate a drop in pass rates due to inconsistencies in order-agnostic outputs, highlighting the inherent quality\\u2013speed trade-off.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-written and easy to follow.\", \"The performance on logical reasoning tasks is good.\"], \"weaknesses\": [\"I think this paper is similar to another type of work, i.e., speculative decoding; what is the difference between them?\", \"The novelty is limited, since the specific ways for iterative refinement and the training methods to learn correction are borrowed from previous works.\", \"The significant one: this method seems to only work in specific tasks, namely the logical reasoning tasks in this paper. However, we always focus on the generalization of current language models, i.e., being competitive on a wide range of tasks.\"], \"questions\": [\"Is the way to generate tokens in the first step different from that in the process of iterative refinement? Are there any better methods to generate draft tokens?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response, and it truly addresses part of my concerns. Actually, I think a score of 3 is indeed low for this paper, but the current form of the paper does not reach 5.
Since there is no 4 score option, I'm sorry I can only keep my score.\"}", "{\"summary\": \"The authors propose a new decoding method, called CORAL, which can speed up the decoding process and maintain (or raise) the performance of the model in some tasks. CORAL has 2 parts: prediction and verification. The experiment shows that the verification part can help the model to generate more accurate results. CORAL also designed a strategy named \\\"multi-forward\\\" to speed up the decoding process (although it may hurt the performance). The result shows that CORAL is useful in math problems but is useless in the code generation task.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The topic of the paper is interesting; transformer-based models do have the problem of slow decoding speed.\\n\\n2. It makes a good balance between speed and performance.\", \"weaknesses\": \"1. The improvement of CORAL is not generalizable enough. It only works well in some math/logic problems but not in the code generation task.\\n\\n2. Although the speed of the decoding process is improved, it needs to use more GPU memory (and \\\"waste\\\" some computation because of verification and multi-forward) to achieve this. So it is not friendly to equipment that most people use.\", \"questions\": \"In eq.8 entropy is always positive, so -H(x) is always negative and exp(-H(x)) is always less than 1. So min(a,a*exp(-H(x))) is always a*exp(-H(x)).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer DAsr\", \"comment\": \"We appreciate the reviewer's insightful suggestions!\\n\\n> Lack of survey of some (maybe kind of obsolete yet important) existing methods: This method resembles Scheduled Sampling in multiple aspects, yet it severely lacks the acknowledgement of this method (no citation nor even mentioning).
It shares many ideas and practices with SS, necessitating a deeper analysis on the connection and differences between the method\\n\\nThanks for bringing up the work on scheduled sampling (Bengio et al. 2015). We acknowledge the similarities and have added the following discussion in Appendix B to address the connections and differences:\\n\\n**Motivation and Target Problems to Tackle**. Scheduled sampling aims to mitigate the discrepancy between training and inference, while COrAL introduces a generalized framework to model various dependencies within the context. The motivations and target problems are distinct.\\n\\n**Training: Mixed Training Schemes**. Scheduled sampling gradually transitions from teacher-forcing to self-generated inputs using curriculum learning. In contrast, COrAL decomposes the order-agnostic training into two separate objectives: forward prediction with ground-truth input and backward reconstruction with corrupted input. Inspired by scheduled sampling, future iterations of COrAL could explore curriculum strategies to gradually increase corrupted input ratios, enhancing robustness and stability.\\n\\n**Inference: Order-Agnostic v.s. Sequential Decoding**. Scheduled sampling is designed for sequential decoding at inference, while COrAL employs blockwise order-agnostic decoding, enabling multi-token forward prediction for speedup and backward refinement for quality improvement.\\n\\n**Experiment: Impact of Mixing Ratio**. Both methods highlight the importance of balance. Similar to the findings of the scheduled sampling that pure self-generated training performs poorly, we observe that a high corruption ratio (e.g., $0.5$) in COrAL significantly degrades performance, underscoring the need for carefully designed corruption schemes.\\n\\n> Lack of deeper discussion on the theoretical insights\\n\\nWe appreciate your suggestion to expand on the theoretical aspects of COrAL. 
While our focus is on the empirical exploration of order-agnostic modeling, we align our findings with theoretical insights from Zhang et al. (2024), which compare autoregressive and masked paradigms. Below, we summarize the relevant theoretical parallels:\\n\\n**Enhanced Connectivity in Multi-Token Predictions**. From a graph perspective, we consider the co-occurrence matrix of conditional and target text, where the nodes and the edge weights represent the texts and their joint probability, respectively. Zhang et al. (2024) show that superior downstream performance is linked to enhanced connectivity in the co-occurrence matrix of conditional and target text. Likewise, COrAL\\u2019s multi-token dependencies improve connectivity compared to AR models, explaining the effectiveness of our ensemble verification policy (Eq. 6). This aligns with our empirical results, where the ensemble verification mechanism (Ours w/o multi-forward) significantly boosts reasoning task performance across various datasets.\\n\\n**Impact of Aggressive Mask Ratios**. Larger mask ratios cluster samples more effectively in feature space, as demonstrated in Zhang et al. (2024). This theoretically explains the impact of the corruption ratio on model performance in COrAL. This explains why moderate corruption ratios (e.g., $0.125$\\u2013$0.25$) in COrAL enhance reconstruction, as shown in Figure 6(b).\\n\\n**Autoregressive Models Obtain a Smaller Error Compared to Masked Models**. AR models inherently minimize errors more effectively in generation tasks. However, consistency across output distributions can bridge this gap for masked models. This indicates the importance of the consistency of different positions for better masked modeling. 
Likewise, our ablation study on the corruption granularity in Figure 6(a) demonstrates how to maintain this consistency by balancing corruption piece lengths and the maximum context window size $k=8$, as short corrupted pieces (e.g., $1, 2$) may break up the coherence of the sequence while longer pieces (e.g., $8$) require longer dependencies that may not be available.\\n\\n---\\nSamy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer: Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks. NIPS 2015: 1171-1179\\n\\nQi Zhang, Tianqi Du, Haotian Huang, Yifei Wang, Yisen Wang: Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining. ICML 2024\"}", "{\"title\": \"Response to Reviewer U9ey (1/2)\", \"comment\": \"Thanks for taking the time to review our work. Below, we clarify the contributions and novelty of our work first and address your comments.\\n\\n---\", \"we_hope_to_highlight_our_research_focus_first_to_clarify_our_main_contribution_as_follows\": \"* **Introduction of COrAL**: We present a language modeling approach that unifies denoising with context-wise order-agnostic language modeling, effectively combining the strengths of AR and NAR models.\\n* **Development of Blockwise Order-Agnostic Decoding**: We propose an efficient decoding strategy that enables multi-token prediction and backward reconstruction within context windows, enhancing both performance and inference speed.\\n* **Application of Target-Aware Positional Encoding**: We employ a generalized Rotary Position Embedding in the Transformer architecture to maintain target-aware positional information without modifying the model's architecture or necessitating extensive pretraining.\\n* **Empirical Validation**: We demonstrate through comprehensive experiments that COrAL achieves significant improvements in accuracy and inference speed on reasoning tasks, while also discussing the limitations observed in code generation tasks.\\n* Our 
approach offers a promising direction for developing more efficient and capable large language models by effectively capturing local dependencies within context windows while maintaining computational efficiency.\\n---\\n\\n> I think this paper is similar to the other types of works, i.e., speculative decoding, what is the difference between them?\\n\\nThanks for the clarifying question. Our proposed decoding method, blockwise order-agnostic decoding, differs from speculative decoding in the following key aspects:\\n\\n**No Separate Draft Model**: The typical speculative decoding approach (Chen et al., 2023; Leviathan et al., 2023) employs a smaller, faster draft model to propose multiple continuations, which the larger target model then verifies and accepts. This inherently adds memory overhead and limits distributional deployment, while our approach leverages the order-agnostic capability of COrAL to generate the draft tokens using the same model, ensuring scalability and efficiency.\\n\\n**Orthogonal Strategy of Draft Token Generation Compared to Self-Speculative Decoding**: Self-speculative decoding (Zhang et al. 2024) uses the same model for drafting by selectively skipping certain intermediate layers. However, this may take hours to configure and limit interpretability and generalization. 
COrAL instead uses order-agnostic generation to break sequential dependencies in AR-LLMs, enabling efficient multi-token drafting.\\n\\n**Quality Improvement via Backward Refinement**: Previous works of speculative decoding mainly focus on inference acceleration with lightweight drafting, while our approach combines speed-up with quality improvements through iterative backward refinement of generated content.\\n\\nWe have clarified these differences in Section 3 (lines 237-245) and have now expanded the discussion in Appendix B to provide further detail.\\n\\n> The novelty is limited, since the specific ways for iterative refinements, the training methods to learn correction, are borrowed from previous works\\n\\nWe appreciate your concern about the novelty. Our approach introduces a novel framework by converting the output-level sequential refinement into an internal order-agnostic decoding process. Below is a comparison with existing methods:\\n\\n**Prompt Engineering**. Works such as Self-Refine (Madaan et al. 2023) and Reflexion (Shinn et al., 2024) exploit incorrect attempts in historical data to improve the performance of a frozen LLM. In contrast, our method enables the model to directly correct generated mistakes via backward refinement.\\n\\n**Verifier Training**. Such approaches (Cobbe et al., 2021; Lightman et al., 2023) train separate models to re-rank outputs. These strategies are orthogonal to our method, which could further enhance COrAL by providing stronger verification mechanisms.\\n\\n**Mistake-Correction Fine-Tuning**. An et al. (2024) demonstrate that the mistake reasoning data can be directly utilized through a standard fine-tuning approach. 
However, this approach relies on AR-LLMs and sequential prediction, whereas COrAL introduces a fundamentally new paradigm by enabling mistake correction through backward dependencies.\\n\\nWe have added a detailed discussion of these distinctions in Appendix B.\"}", "{\"comment\": \"Dear Reviewer DAsr,\\n\\nThanks again for your valuable feedback and recognition of our contributions. \\n\\nWith the extended discussion period, we would like to engage in further discussions and address any remaining questions or concerns!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer U9ey (2/2)\", \"comment\": \"> This method seems to only work in specific tasks, the logical reasoning tasks in this paper. However, we always focus on the generalization of current language models, i.e., the competitive on a wide range of tasks.\\n\\nWe appreciate the reviewer\\u2019s concern about the generalizability of COrAL. \\n\\nWe would like to highlight that COrAL achieves **significant improvements in accuracy and inference speed on various reasoning tasks**, spanning from arithmetic reasoning, including GSM8K ($+4.6$%) and MATH ($+2.5$%), to logical reasoning containing LogiQA ($+4.0$%) and ReClor ($+1.5$%). Our comprehensive experiments demonstrated the potential of order-agnostic language modeling to enhance reasoning through internal iterative refinement. On the other hand, our extended experiment on code generation also shows the limitations of COrAL in tasks that require strict syntactic coherence. This evaluation **highlights both the strengths and limitations of COrAL, providing valuable insights for future research**. 
\\n\\nDue to computational and data constraints, we leave the exploration of other tasks (e.g., instruction following, dialogue, infilling) to future work to further investigate the generalizability and specialty of COrAL.\\n\\n> If the way to generate tokens in the first step is different from that in the process of iterative refinements, are there any better methods to generate draft tokens?\\n\\nBesides order-agnostic generation, alternative strategies such as separate draft models (Chen et al., 2023; Leviathan et al., 2023) or self-speculative decoding (Zhang et al., 2024) could be employed. However, these methods primarily focus on inference speed-up and are inherently tied to AR-LLMs, making them **fundamentally different** from COrAL, which integrates multi-token generation and backward refinement within an order-agnostic paradigm. Below is a conceptual comparison:\\n\\n| Approach | Additional Draft Model (Scalability) | AR-based (Multi-Dependency) | Training |\\n| :- | :-: | :-: | :- |\\n| SpecDecoding | \\u2713 ($\\\\downarrow$) | \\u2713 (\\u2717) | to train the draft model |\\n| Self-SpecDecoding | \\u2717 ($\\\\uparrow$) | \\u2713 (\\u2717) | to determine the layers to skip |\\n| COrAL | \\u2717 ($\\\\uparrow$) | \\u2717 (\\u2713) | to learn order-agnostic modeling |\\n\\n---\\nCharlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper: Accelerating Large Language Model Decoding with Speculative Sampling. CoRR abs/2302.01318 (2023)\\n\\nYaniv Leviathan, Matan Kalman, Yossi Matias: Fast Inference from Transformers via Speculative Decoding. ICML 2023: 19274-19286\\n\\nJun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, Sharad Mehrotra: Draft& Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. 
ACL (1) 2024: 11263-11282\\n\\nAman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark: Self-Refine: Iterative Refinement with Self-Feedback. NeurIPS 2023\\n\\nNoah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao: Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023\\n\\nKarl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman: Training Verifiers to Solve Math Word Problems. CoRR abs/2110.14168 (2021)\\n\\nHunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe: Let's Verify Step by Step. ICLR 2024\\n\\nShengnan An, Zexiong Ma, Siqi Cai, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, Weizhu Chen: Can LLMs Learn From Mistakes? An Empirical Study on Reasoning Tasks. EMNLP (Findings) 2024: 833-854\"}", "{\"title\": \"Further Response to Reviewer U9ey (2/2)\", \"comment\": \"> **Key Contributions of COrAL**. I find the key points of COrAL is fuzzy \\u2026 I think the actual challenges that COrAL can solve are those specially under the iterative refinements paradigm for pre-trained AR models, but this paradigm is not the main stream of these models.\\n\\nThanks for the clarifying question regarding the key contributions of COrAL. While the AR architecture does not inherently support iterative refinement compared to NAR models, AR modeling becomes the de facto standard for generative language modeling. Existing inference-time scaling methods, often implemented at the application or prompting level, rely heavily on AR decoding. 
This brings inherent limitations, such as potential inductive biases and high inference latency due to the monotonic dependency in next-token prediction.\\n\\nAs discussed above, **COrAL addresses the above challenges in AR-LLMs by introducing efficient iterative refinement tailored for complex tasks**. Our results (e.g., w/o verifier and Figures 1 and 5) demonstrate that COrAL achieves significant speedup on reasoning tasks, such as GSM8K, outperforming speculative sampling [3] on 7B models.\\n\\nRegarding the sub-optimal performance of AR models, we have also demonstrated that our approach (w/o multi-forward) consistently outperforms both the next-token and self-consistency (SC) baselines, **achieving higher accuracy while consuming less time per sample compared to SC** (see Table 1 in our updated version), indicating the promise of COrAL for reducing computational cost and enhancing performance.\\n\\nWe hope this addresses the reviewer's concerns and we look forward to their response!\\n\\n---\\n\\n[1] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer: Mask-Predict: Parallel Decoding of Conditional Masked Language Models. EMNLP/IJCNLP (1) 2019: 6111-6120\\n\\n[2] Yisheng Xiao, Juntao Li, Zechen Sun, Zechang Li, Qingrong Xia, Xinyu Duan, Zhefeng Wang, Min Zhang: Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations. ICLR 2024\\n\\n[3] Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang: EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty. ICML 2024\\n\\n[4] Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozi\\u00e8re, David Lopez-Paz, Gabriel Synnaeve: Better & Faster Large Language Models via Multi-token Prediction. 
ICML 2024\\n\\n[5] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark: Self-Refine: Iterative Refinement with Self-Feedback. NeurIPS 2023\\n\\n[6] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao: Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023\"}", "{\"title\": \"Further Response to Reviewer U9ey (1/2)\", \"comment\": \"Thanks for your constructive feedback. We appreciate the opportunity to clarify the contributions and advantages of COrAL. Below, we address each of your points in detail.\\n\\n> **Efficient Iterative Refinement and Speedup**. According to your title, COrAL is specially designed for efficient iterative refinement, but the results do not demonstrate this, except on LogiQA w/o verifier \\u2026 thus I do not think this is efficient.\\n\\nCOrAL aims to reduce inference latency in AR-LLMs by performing iterative refinement internally via order-agnostic generation rather than relying on AR-based prompting. Its efficiency stems from multi-token prediction and backward reconstruction, achieving up to a $3.9\\\\times$ speedup (GSM8K) over the AR baseline without significant performance degradation. \\n\\nTo further probe this efficiency, we conducted two experiments: \\n* **Performance\\u2013Speed Trade-offs**. Similar to [1], we illustrate the performance\\u2013speed trade-offs of COrAL in Figure 5(a). By increasing the decoding block size, COrAL can **approach the maximum speedup rate of about $4\\\\times$ while retaining the generation quality**. 
Moreover, unlike CMLM models [1,2], which face challenges with varying-length predictions, COrAL **maintains the flexibility of AR decoding to handle variable-length generation**.\\n* **Scaling Performance with Iterative Refinement**. As shown in Figure 1, COrAL enables **faster performance scaling than the inference cost** as the iteration time increases. Leveraging backward dependencies, it outperforms forward-only refinement, reaching a higher plateau of accuracy at a reasonable computational cost.\\n\\n> **Advantages Over Speculative Decoding**. Compared with works of speculative decoding [3,4], they can also achieve >2.0x speedup without sacrificing performance \\u2026 but I can not find the advantages of COrAL.\\n\\nCOrAL demonstrates distinct advantages over speculative decoding, particularly in reasoning tasks. According to Table 1 in [3], on GSM8K, speculative sampling (EAGLE) achieves speedups up to $3.01\\\\times$ and $2.91\\\\times$ on 7B models Vicuna-7B and Llama2-Chat-7B, respectively. By comparison, COrAL achieves $>3.5\\\\times$ speedup with comparable accuracy ($73.0$%), as shown in Figure 5(a). Additionally, COrAL eliminates the need for a separate draft model, reducing memory overhead.\\n\\nOn the other hand, we acknowledge that COrAL faces limitations on tasks requiring strict syntactic coherence, such as code generation. We attribute this to the discrepancy between pretraining and fine-tuning objectives, which affects the quality of draft tokens in order-agnostic generation. Addressing this limitation via pretraining with order-agnostic modeling remains an important direction for future work (Appendix E). 
We refer to [4], which explores pretraining with multi-token prediction and highlights the potential benefits of incorporating COrAL into the pretraining stage to enhance its capabilities.\\n\\n> The strengths of AR models are the better capturing of the target token dependency thanks to their simple autoregressive modeling paradigm \\u2026 I think the framework is more likely to be a tool to transform the pre-trained AR models to support multi-token forward prediction and verification.\\n\\nThanks for the recognition of **COrAL\\u2019s contribution as a tool to transform pre-trained AR models for multi-token prediction and verification**. While iterative refinement is not inherently a strength of AR models, the prompting-level refinement paradigm has proven effective for enhancing AR-LLMs on complex tasks [5,6]. COrAL addresses the latency challenges of this paradigm by enabling efficient multi-token prediction and backward reconstruction, thus **extending AR models' capabilities for iterative refinement**.\", \"we_also_respectfully_note_that_other_reviewers_have_recognized_the_contributions_of_coral_in_terms_of_introducing_the_novel_paradigm_to_address_limitations_in_ar_llms\": \"reviewer `DAsr` highlighted that _\\\"COrAL\\u2019s order-agnostic framework allows simultaneous forward and backward processing, significantly reducing inference latency compared to traditional autoregressive models\\\"_, reviewer `oenX` mentioned that we _\\\"propose an interesting paradigm\\\"_ and _\\\"introduce a novel decoding strategy combining autoregressive modelling with ROBERTA-like order agnostic refinement\\\"_, and reviewer `LRiM` emphasized that the _\\\"topic of the paper is interesting\\\"_, where _\\\"transformer-based model do have the problem of slow decoding speed\\\"_.\"}", "{\"title\": \"Looking forward to further feedback\", \"comment\": \"Dear Reviewer DAsr,\\n\\nThank you again for your valuable comments and the effort you put into reviewing our work! 
We have carefully addressed the main concerns in detail and hope you find our responses satisfactory, as other reviewers have. As the discussion phase is about to close, we look forward to hearing any additional feedback you may have. We will be happy to clarify or provide additional details.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Thanks for the update and valuable feedback\", \"comment\": [\"Thanks for engaging in discussion with us. We are glad that our responses have addressed part of your concerns. Below, we outline updates to the manuscript that may further address your concerns:\", \"**Introduction (lines 101-102)**. We highlighted our contribution of introducing a tool to transform pre-trained AR models for multi-token prediction and verification, as suggested by the reviewer.\", \"**Table 1**. We added a comparison to self-consistency, which has a comparable computational cost as the w/o multi-token variant of our decoding approach. This demonstrates the efficiency of the iterative refinement mechanism in COrAL, achieving higher accuracy while consuming less computation cost.\", \"**Appendix A**. We made a conceptual comparison between COrAL, AR, and NAR architectures to highlight the advantages of COrAL, such as variable-length generation, multi-dependency, and efficient iterative refinement.\", \"**Appendix E**. We provided extensive comparisons to other inference approaches, including speculative sampling and iterative refinement, to clarify the distinctions and strengths of COrAL.\", \"**Limitations**. We expanded the discussion on training challenges, optimization issues, and COrAL\\u2019s limitations on tasks requiring strict syntactic coherence. This enhances clarity and provides insights for future directions to explore COrAL\\u2019s potential and generalizability with more computational resources.\", \"We believe the additional experiments, analysis, and discussion have significantly improved the quality and clarity of our submission. 
We hope these enhancements provide sufficient grounds for reconsideration of the score. Please let us know if you have additional questions or concerns.\"]}", "{\"comment\": \"I have read the response and would like to keep my scores. Thanks.\"}", "{\"title\": \"Thanks for the update and valuable feedback\", \"comment\": \"Thanks for appreciating our response and for updating the score. We greatly value your feedback and are happy to know that you found the response satisfactory.\"}", "{\"summary\": \"The authors introduce a novel decoding strategy combining autoregressive modelling with ROBERTA-like order agnostic refinement. Given a partial sequence, they predict multiple tokens ahead, which they subsequently refine using ROBERTA-like denoising autoencoder. The authors see performance improvements on GSM8K and LogiQA and poor performance on code generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"the authors propose an interesting paradigm and show that it has promise for reducing computational cost and enhancing performance in certain settings\", \"the method is applicable to autoregressive pretrained language models and seems to improve their performance in certain settings\", \"the authors provide a quite extensive ablation study for their method\", \"the paper contains some beautiful figures such as figure (2) and (3). Even though Figure (2) is a little bit unclear to me. Why are there seemingly different offsets for the refinements and why is there not much visual seperation inbetween forward prediction and refinement?\"], \"weaknesses\": [\"pseudo-code for Algorithm 1 is provided without walking through the pseudo-code\", \"in the experimental section the baselines are not described in enough detail, just AR. 
the proposed method requires finetuning, are the AR baselines also finetuned on the tasks?\", \"the by-far-best performance is achieved using the w/o multi-head prediction ablation, which is not the proposed method and thus weird. I assume this variant suffers from increased computational cost compared to the proposed method. It would be interesting to compare this ablation with a method from the related work that has a similar computational cost.\", \"comparison to refinement methods from the related work is missing\", \"a somewhat non-standard notation for expected values is used. Their subscripts seem to be used much like in summations, but usually subscripts at an expected value are used to indicate over which distribution the expectation is taken: e.g., equation (1) and equation (3)\"], \"questions\": \"It would be really interesting to check how much performance is lost by starting from a pretrained model as compared to fully training a method employing COrAL from scratch. Do you think that some performance is left on the table because you start from a pretrained model?\\n\\nIn the main result part, to increase my rating I would like to see a comparison to other iterative refinement methods that have a similar computational cost as the w/o multi-token prediction variant of the proposed method and also a more detailed description of the autoregressive baseline.\", \"suggestion\": \"Maybe it would be a good idea to incorporate an application in which this method shines. E.g., by looking into domains that can benefit from the order-agnostic aspect such as protein language modelling.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response.\\nI do agree with you about the contribution of Blockwise Order-Agnostic Decoding and Target-Aware Positional Encoding to the COrAL framework. 
However, I am still concerned about whether COrAL actually effectively combines the strengths of AR and NAR models. \\n- Firstly, according to your title, COrAL is specially designed for efficient iterative refinement, but the results do not demonstrate this, except on LogiQA w/o verifier. Actually, original fully NAR models can achieve >10x speedup, and around 3x with iterative refinements, e.g., CMLM models [1,2]. However, the results of COrAL (the results of ours in Tables) only achieve 1.1x or 1.2x speedup, thus I do not think this is efficient. \\n- Compared with works of speculative decoding [3,4], they can also achieve >2.0x speedup without sacrificing performance, even on code generation tasks. Therefore, I agree with you about the difference from speculative decoding, but I cannot find the advantages of COrAL. \\n- In my opinion, the strengths of AR models are the better capturing of the target token dependency thanks to their simple autoregressive modeling paradigm; however, the order-agnostic decoding seems to complicate the modeling process, and iterative refinement is also not the strength of AR models. Therefore, I think the framework is more likely to be a tool to transform the pre-trained AR models to support multi-token forward prediction and verification. \\n- Overall, I find the key points of COrAL are fuzzy, which also leads to my concerns about COrAL. The authors say that COrAL is designed to overcome several challenges of AR models. If the challenge is the slow decoding speed, I think COrAL should at least show advantages (higher speedup) compared with speculative decoding; if the challenge is the sub-optimal performance, COrAL should outperform AR models on most tasks, not present a tradeoff. Therefore, I think the actual challenges that COrAL can solve are those specifically under the iterative refinement paradigm for pre-trained AR models, but this paradigm is not the mainstream for these models. 
\\n\\nI will keep my score for now, but I look forward to further discussion with you. \\n\\n[1] Mask-predict: Parallel decoding of conditional masked language models, EMNLP 2019. \\n[2] Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations, ICLR 2024. \\n[3] Eagle: Speculative sampling requires rethinking feature uncertainty\\n[4] Medusa: Simple llm inference acceleration framework with multiple decoding heads\"}", "{\"title\": \"Thanks for the update and valuable feedback\", \"comment\": \"Dear Reviewer LRiM,\\n\\nThank you very much for the valuable feedback and the update on the confidence score. We are happy to know that you found the response satisfactory. We have included a detailed discussion about the generalizability of COrAL in the Limitations and a more thorough comparison of the computation overhead of different decoding approaches in Appendix E. We hope you might view this as sufficient reason to further raise your score.\\n\\nBest,\\n\\nAuthors\"}", "{\"summary\": \"The paper proposes COrAL (Context-Wise Order-Agnostic Language Modeling), a novel architecture for language modeling that enhances efficiency in iterative refinement, aiming to reduce inference latency in large language models (LLMs). Traditional autoregressive models, which generate text sequentially, struggle with efficiency due to the inherently linear time complexity of inference. COrAL incorporates iterative refinement directly into the model, allowing multi-token generation and backward reconstruction within manageable context windows. This order-agnostic approach enables simultaneous forward and backward decoding within sliding context windows, effectively accelerating inference and improving performance on reasoning tasks. Empirical tests show significant improvements in both accuracy and inference speed, demonstrating COrAL's promise in capturing diverse token dependencies without the high latency typical of AR models. 
However, challenges remain, such as reduced performance in code generation due to output consistency issues, indicating areas for further refinement.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Improved Efficiency and Performance: COrAL\\u2019s order-agnostic framework allows simultaneous forward and backward processing, significantly reducing inference latency compared to traditional autoregressive models. Compared to the ablated baselines, empirical results on datasets like GSM8K and LogiQA demonstrate notable accuracy gains, confirming the model\\u2019s effectiveness in complex reasoning tasks.\", \"Scalably Adaptable from Existing Models: By using context-wise modeling and target-aware positional encoding, COrAL manages to enhance dependency capture without substantially increasing computational resources, making it feasible for deployment in large-scale applications, even with existing large language models with only minor adaptation.\"], \"weaknesses\": [\"Lack of survey of some (maybe kind of obsolete yet important) existing methods: This method resembles Scheduled Sampling in multiple aspects, yet it severely lacks acknowledgement of this method (no citation or even a mention). It shares many ideas and practices with SS, necessitating a deeper analysis of the connections and differences between the methods. 
For example, I'd recommend that the authors emphasize the capability of the proposed method on semi-parallel, refinement-based generation, whereas SS was originally only proposed for performance improvements in sequential generation.\", \"Lack of deeper discussion on the theoretical insights: I appreciate the authors' awesome work in presenting and delivering the empirical results, but I presume it would appeal to the community more if some insightful conclusions were presented alongside the experiment observations.\"], \"questions\": \"The clarity of the paper is good; it's generally easy for people to follow. I don't have further questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0JcPJ0CLbx
Revisiting MAE pre-training for 3D medical image segmentation
[ "Tassilo Wald", "Constantin Ulrich", "Stanislav Lukyanenko", "Andrei Goncharov", "Alberto Paderno", "Leander Maerkisch", "Paul F Jaeger", "Klaus Maier-Hein" ]
Self-Supervised Learning (SSL) presents an exciting opportunity to unlock the potential of vast, untapped clinical datasets, for various downstream applications that suffer from the scarcity of labeled data. While SSL has revolutionized fields like natural language processing and computer vision, their adoption in 3D medical image computing has been limited by three key pitfalls: Small pre-training dataset sizes, architectures inadequate for 3D medical image analysis, and insufficient evaluation practices. We address these issues by i) leveraging a large-scale dataset of 44k 3D brain MRI volumes and ii) using a Residual Encoder U-Net architecture within the state-of-the-art nnU-Net framework. iii) A robust development framework, incorporating 5 development and 8 testing brain MRI segmentation datasets, allowed performance-driven design decisions to optimize the simple concept of Masked Auto Encoders (MAEs) for 3D CNNs. The resulting model not only surpasses previous SSL methods but also outperforms the strong nnU-Net baseline by an average of approximately 3 Dice points. Furthermore, our model demonstrates exceptional stability, achieving the highest average rank of 2 out of 7 methods, compared to the second-best method’s mean rank of 3. Our code is made available here.
[ "self-supervised learning", "medical image segmentation", "foundation models", "medical image computing", "CNN", "nnU-Net" ]
https://openreview.net/pdf?id=0JcPJ0CLbx
https://openreview.net/forum?id=0JcPJ0CLbx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ns2wWCI0TS", "igHoNiMuGi", "iF7oMf3Z38", "dnPb5GMEWt", "EpVvsW76br" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1729237575515, 1729897663226, 1730720679244, 1730613034574, 1731491225705 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7147/Reviewer_nA7J" ], [ "ICLR.cc/2025/Conference/Submission7147/Reviewer_s7sF" ], [ "ICLR.cc/2025/Conference/Submission7147/Reviewer_UFXL" ], [ "ICLR.cc/2025/Conference/Submission7147/Reviewer_bU68" ], [ "ICLR.cc/2025/Conference/Submission7147/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a framework based on self-supervised learning in which a large dataset of 3D brain MRI images is leveraged. The model resulting from this framework was fine-tuned and evaluated on various down-stream tasks, yielding segmentations more accurate than other state-of-the-art models such as nnUNet.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This work identifies and tackles three issues regarding the evaluation of previous methods: small dataset size, inadequate backbones, and insufficient evaluation.\", \"The pretrained model was evaluated on several datasets with different down-stream segmentation tasks.\"], \"weaknesses\": [\"Very unclear and unorganized manuscript. I believe that it can be improved substantially. I specified many details in \\\"Suggestions\\\", including the following: previous related works were not described, unclear concepts are not introduced, parts of what should be the conclusion (e.g., that MAEs dominate, SSL pretraining works) are in the \\\"Results and discussion\\\" section, there is no section/subsection where the experiments are clearly described and instead they're mixed with \\\"results and discussion\\\". 
Another example of mixing: right before the conclusion, in only one paragraph (L508-516), we can find an experiment description, the results, and the discussion, all mixed together.\", \"Limited novelty.\", \"Limited methodological novelty. The framework is based on well-established Masked AutoEncoders \\\"with the recent adaptations introduced by Tian et al. (2023); Woo et al. (2023)\\\".\", \"Partially limited application novelty since the pretrained models are not publicly available. Although the code is shared, researchers may not have access to large datasets; L53 reads that there seems to be \\\"a public decrease in the community\\u2019s willingness to share data\\\" (I don't agree or disagree with this statement, but this may be only regarding brain MRI).\", \"In many cases, it is unclear if one approach is better than another because no standard deviations are shown. In other words, it cannot be understood whether a method achieving a 71.66 dice coefficient is actually better than another method achieving 71.35.\"], \"questions\": [\"## Questions\", \"L213: \\\"When masking the input image, CNNs are not able to ignore the masked regions in the same manner as transformers can.\\\" Can you elaborate on this? (I also suggest doing so in the paper). Why would you want to ignore the masked regions? My understanding is that, the model should learn how to reconstruct those regions.\", \"L249: \\\"a [3x3x3] convolution is applied to the feature maps at every resolution except the highest resolution to prepare the representations for decoding\\\". What do you mean by \\\"prepare\\\" here? why do they need to be \\\"prepared\\\"?\", \"Table 1. What does the underlining indicate?\", \"Table 2. What does it mean \\\"Tr. Stage\\\"? is it \\\"Fine-tuning stage\\\"?\", \"## Suggestions / Other comments\", \"The title generalizes to \\\"3D Medical Image segmentation\\\" but the experiments are only on brain MRI. 
I suggest specifying that in the title.\", \"In the abstract and introduction, the reader is introduced to the concept of \\\"development dataset\\\" (L20), which, to me, wasn't clear until much later.\", \"The contributions listed in the introduction were in the form of \\\"we evade Pitfall X by doing Y\\\". I don't think these are contributions. A contribution is something that you provide to the community, e.g., a novel method, an application, an answer to a research question, etc.\", \"From the beginning of the paper it is advertised that the dataset size is 44k, although this number also includes the images that were discarded. The pretraining dataset size was 39k images, which is still quite large. I suggest saying that the dataset size was 39k and not 44k. Furthermore, the caption of Figure 1 reads \\\"During pretraining, we carefully develop our method with 44k images\\\" which does not seem to be true; the dataset size is 39k.\", \"Figure 1. The \\\"testing\\\" shows a \\\"frozen\\\" icon, but as far as I understood, the models are partially fine-tuned on the \\\"test datasets\\\".\", \"Figure 1. It is unclear what \\\"underline\\\" means.\", \"There is no \\\"Previous work\\\" section. Although in the introduction a lot of previous work has been cited, it was mostly to highlight the deficiencies of that previous work and not to explain what the previous methods were about. \\\"Previous work\\\" also gives context to the paper, and it helps introduce the methods that you will later compare. The methods in Table 1 (VoCo, VF, MG) seem to come out of nowhere, and they're \\\"Previous related works\\\". 
Also, in Section 4.1 \\\"Observations\\\", it is written \\\"SSL schemes using the masked image modeling paradigm (MG, S3D-B, and S3D-L) consistently rank higher than the contrastive VoCo or the pseudo-segmentation-based VolumeFusion pre-training method for CNN pre-training\\\", but the reader has never been told that those previous related works were based on different strategies, which is very important to understand why those methods were chosen.\", \"To better illustrate the masking, I suggest including a figure where the reader can see what the input of the models looks like.\", \"Figure 2. The text and numbers are a bit hard to read. I suggest increasing their size.\", \"Typo in L266: \\\"Results are presented in Table Table 1b\\\"\", \"Typo in L269: \\\"(S3D-B=\\\"\", \"Typo in L204: \\\"betweem\\\"\", \"L306: \\\"MAEs are known to benefit from scaling.\\\". I suggest including a citation.\", \"I suggest having a separate section or subsection where the experiments and experimental settings are clearly defined.\", \"The first line of the conclusion reads: \\\"This work is the first to demonstrate the potential of properly configured MAEs in 3D medical image segmentation\\\". However, by googling \\\"masked auto encoder medical image segmentation\\\" many works pop up (e.g., [1,2,3,4]), and since there was no \\\"previous related work\\\" section, it is not clear if this is really \\\"the first to demonstrate the potential of properly configured MAEs in 3D medical image segmentation\\\"\", \"[1]: Self Pre-training with Masked Autoencoders for Medical Image Classification and Segmentation. ISBI 2023.\", \"[2]. Masked Autoencoders for Unsupervised Anomaly Detection in Medical Images. Procedia Computer Science 2023.\", \"[3]. Advancing Volumetric Medical Image Segmentation via Global-Local Masked Autoencoder. Arxiv 2023\", \"[4]. 
Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging. Sci. Rep 2023.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors identified three key issues in 3D medical image computing and proposed corresponding solutions. They employed the Masked Auto Encoders (MAE) method for pre-training the model within the existing framework, achieving better performance compared to previous SSL methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The authors conducted a substantial number of experiments.\"], \"weaknesses\": [\"The paper resembles a technical report rather than an academic paper, lacking demonstration of its innovation. It addresses the problem by simply combining methods without discussing the essence of the issue. MAE has already proven its competitiveness in previous work, yet the paper merely applies MAE to the backbone without further exploration.\", \"The writing quality is poor, especially with the confusing use of symbols (e.g., the confusion of D[1\\\\~9], DS[1\\\\~9], and dataset names). The excessive use of items and textbf (too much bold text reduces readability) and quotes (all right quotes, which can be displayed correctly in $\\\\LaTeX$ using \\\\`word') makes the paper difficult to read.\", \"The paper lacks a symbolic representation and adequate explanation of the task setup and model, instead focusing extensively on dataset selection and hyperparameter choices.\", \"The figures are confusing. Figure 1 is hard to understand, appearing to mix development and testing in a workflow without showing the model pipeline. Figure 2 is poorly organized, with excessive whitespace and mixed use of pie and bar charts. 
Figure 3 seems to be generated directly by a program, lacking good organization and sufficient explanation, with a lot of meaningless whitespace.\", \"The paper lacks visualizations of some results.\", \"The experimental section only describes performance improvements and changes in tables without further discussion. The results show that the model does not achieve significant performance gains in many experiments (the large model size yields only slight improvements or none at all), suggesting that simply applying MAE does not produce sufficiently good results, and the authors do not propose better methods.\", \"From Table 1-a, it can be observed that model performance improves based on some sparsification adaptations, raising doubts about whether the results in Table 3 are achieved by stacking tricks rather than the method itself. Table 1-c shows no performance improvement from scaling, and Table 3 even shows performance degradation due to scaling, without explanation, which is disappointing for the method.\"], \"questions\": [\"Should the captions for the tables be placed above the tables instead of below?\", \"Should the writing issues and figure problems mentioned in the Weakness section be revised?\", \"Can an explanation be provided for the performance degradation observed with scaling up (Table 3)?\", \"Does the final model's performance degrade when sparsification adaptations are not used?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a SSL framework, called nnSSL, for 3D medical image segmentation based on a MAE strategy and thorough evaluation of various design choices. 
The paper pretrains on a dataset of 44K private MRI scans and designs an SSL framework using 5 public datasets and uses 7 public datasets for further evaluation and comparison to SOTA methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is a very timely and relevant contribution to the field of medical image segmentation where the use of self-supervised pretraining is still in its infancy. The paper is clearly written, and proposes a simple, yet effective framework for pretraining for 3D medical segmentation downstream tasks. The analysis of design choices contains valuable insights. The evaluation is thorough, and compares to the most important baseline methods. While not a novel method, getting the sparse convolution MAE to work in 3D is non-trivial, and making the implementation of this public is a sizeable contribution.\", \"weaknesses\": [\"(W1) The paper does not provide important details on how the weights are transferred for finetuning. Is finetuning performed in the nnUnet framework? Which augmentations are used when finetuning? Are the learning-rate, augmentations etc. fixed for all evaluation datasets? As noted by the authors, selecting an appropriate configuration for each dataset is important. I assume that the configuration is also dynamic for S3D, however the paper does not contain any mention of how this is achieved with pretrained weights.\", \"(W2) The authors use a patch size of 160^3 which is significantly larger than most previous works, however do not provide any ablations of the effect of this. The proposed performance gains therefore cannot be ruled out to be mainly from using a larger patch size.\", \"(W3) The paper lacks references to important related work. 
Specifically, the authors are suggested to include the following two articles in the related works section:\", \"SSL with a convolutional MAE for 3D MRI segmentation on a **public** dataset of 44K MRI scans, which similarly revisits various design choices for CNN pretraining, yet with inferior evaluation: [1]\", \"Implementation of Spark-like sparse masking for large-scale 3D medical image pretraining: [2]\", \"(W4) The notes on scaling and the S3D-L variant is misleading since it does not use a model of larger size, yet is scaled in other ways. This meaningfully departs from the established literature, and the authors are encouraged to find another way of communicating the different training setup. Scaling the model and data sizes are important ingredients in compound scaling, yet none of these are performed.\", \"(W5) The pretraining dataset is private and only limited information on the nature of this dataset is included. For reproducibility purposes, it would be beneficial for the community if the authors would release checkpoints trained on Brains-45K (similar size to the used dataset) from [1].\", \"(W6) The abstract mentions pretraining is on a dataset of 44K 3D MRI volumes, however the actual pretraining dataset is 39K volumes after filtering out low-quality data. This discrepancy is misleading.\"], \"references\": \"[1] Munk, Asbj\\u00f8rn, et al. \\\"AMAES: Augmented Masked Autoencoder Pretraining on Public Brain MRI Data for 3D-Native Segmentation.\\\"\\u00a0_arXiv preprint arXiv:2408.00640_\\u00a0(2024).\\n\\n[2] Tang, Fenghe, et al. \\\"Hyspark: Hybrid sparse masking for large scale medical image pre-training.\\\"\\u00a0_International Conference on Medical Image Computing and Computer-Assisted Intervention_. Cham: Springer Nature Switzerland, 2024.\", \"questions\": [\"How is the finetuning implemented? 
Does the finetuning use nnUnet or a nnUnet like framework?\", \"Will the authors release pretrained weights and results on public data, such as Brains-45K?\", \"The authors use a patch size of 160^3, however this is not standard by nnUNet. What is the performance improvement over using 128 or 96 standard in many previous works?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to benchmark various mask autoencoder (MAE) or masked image modeling (MIM) pretraining configurations to determine the optimal one for 3D medical image segmentation using CNN. It collected a large-scale MRI dataset containing around 40k scans for pretraining. The pre-trained model was then applied and evaluated on 13 datasets/tasks.\", \"soundness\": \"1. Given the current experiment setups, it is insufficient to conclude the optimal pretraining strategy. \\n* The patch size of MAE is a critical parameter, while Kaiming's MAE paper [1] did not ablate on that, some other studies ablated on that parameter and found significant performance differences [2-4]. This paper utilized a patch size of 5x5x5 in the bottleneck, equivalent to 32x32x32 in image space. It seems **too large** for the 3D MAE. Both [3] and [4] indicate in the 3D medical image, a high masking ratio and a small patch size are the key ([3] used a patch size of 16x16x16, [4] used 8x8x8). \\n* Regarding scaling of MAE pretraining, this paper only investigates having 8x batch size, larger learning rate, and 4x training iterations. Those are not keys to evaluating scaling. Scaling more refers to performance gain with an **increase in data size** and an **increase in model parameters**. On high pretraining iterations, the impact of larger batch size and learning rate may not be significant. 
Extending training iterations may also not help as the MAE training tends to saturate after prolonged training ([1] Fig 7 upper, 800 vs. 1600 epochs are very close, 84.9 vs. 85.1). So what will be really interesting to see is to ablate on 1. **training on 10%, 25%, 50%, 75%, 100% of 40k pretraining datasets**; 2. **varying model's depth to see how performance changes with model size**. In addition, the naming of S3D-L is very **misleading**, as -L always indicates a larger model (with more parameters) in the ML naming convention. \\n\\nThe above two reasons lead to a rating of soundness of 2, as without experiments on those two perspectives, it is hard to conclude the current manuscript presents the optimal strategy.\", \"presentation\": \"3\", \"contribution\": \"The reason for a rate of 2 in the contribution is that the current manuscript, entitled 'Revisiting MAE pre-training for 3D medical image segmentation', did not include any comparison with previous studies that utilized MAE pretraining for 3D medical image analysis, notably [3, 5]. Instead, it only involves comparisons with Model Genesis, Volume Fusion, and VoCo. \\n\\nThe contribution of the current study will be much higher if compared to the existing 3D MAE pretraining framework developed for medical images (i.e., [3,5]).\", \"strengths\": \"1. Benchmarking SSL pretraining strategy is absolutely important in all fields of AI, including medical vision.\\n2. This involves a large-scale pretraining dataset, ~40k brain scans. \\n3. The downstream evaluation sets are also diverse. \\n4. The presentation is easy to follow, but it certainly can be further improved.\", \"weaknesses\": \"The reviewer rated soundness as 2 and contribution as 2, given the following reasons:\", \"others\": [\"The quality of Fig. 
2 can be improved.\", \"**Overall**, the reviewer recommends rejection because the technical flaws and a lack of comparison with existing 3D MAE frameworks (as presented above) outweigh the benefits brought by large-scale datasets and diverse downstream evaluations.\", \"[1]: He, Kaiming, et al. \\\"Masked autoencoders are scalable vision learners.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\", \"[2]: Xie, Zhenda, et al. \\\"Simmim: A simple framework for masked image modeling.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\", \"[3]: Chen, Zekai, et al. \\\"Masked image modeling advances 3d medical image analysis.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.\", \"[4]: Zhang, Xuzhe, et al. \\\"MAPSeg: Unified Unsupervised Domain Adaptation for Heterogeneous Medical Image Segmentation Based on 3D Masked Autoencoding and Pseudo-Labeling.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"[5]: Tang, Yucheng, et al. \\\"Self-supervised pre-training of swin transformers for 3d medical image analysis.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\"], \"questions\": \"What is the reason for excluding the existing 3D MAE SSL pretraining frameworks for medical images from Table 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Given the Scores and the Reviews, we withdraw the paper from ICLR and will revise it.\\nWe want to thank the reviewers for their time and their mostly constructive feedback. 
\\n\\n_Despite withdrawing, we believe some points of criticism are disputable and the following should be noted:_\\n\\n> Claim of first working MAE baseline without showing \\\"Self-supervised pre-training of swin transformers for 3d medical image analysis.\\\" and \\\"Masked image modeling advances 3d medical image analysis.\\\" are not working\\n\\nWe are certain that we are the first to show convincing results of MAE pre-training but agree that we should provide additional evidence of SwinUNETR and the other Transformer MAE Baseline being sub-par. We originally believed the known deficiencies of SwinUNETR and transformers in 3D medical image segmentation to be sufficient in themselves, but will provide evidence in future versions. \\n\\n> Claiming to train on 44k volumes is misleading, as we filter down to 39k\\n\\nThis will be reworked in the future version. \\n\\n> Scaling should increase data and parameters\\n\\nOriginally this scaling was conducted to allow adaptation of the architecture on smaller consumer-GPUs as was stated in the manuscript. Despite this, we agree that this scaling is suboptimal and that the naming convention is confusing. We will provide a better scaling scheme/paradigm in the future.\\n\\n> Partially limited novelty as pre-trained models are not publicly available\\n\\nWe agree that public pre-trained weights would improve the contribution, hence we will provide pre-trained weights in a future version, created on the public 41k volume large ABCD dataset.\\n\\n> Missing reference to AMAES: Augmented Masked Autoencoder Pretraining on Public Brain MRI Data for 3D-Native Segmentation.\\n\\nWhile we would like to use this publicly available dataset, we want to note that there is no simple way of obtaining it. Many singular data-usage requests need to be conducted, and singular datasets come with specific hurdles associated with use of their data. E.g. 
PPMI requires users to get papers administratively reviewed, `If I seek to publish manuscripts using data from PPMI, I agree to follow the guidelines established and written out in the PPMI Publications Policy, including sending manuscripts to the PPMI Data and Publications Committee (DPC) for administrative review.` https://ida.loni.usc.edu/collaboration/access/appLicense.jsp . Same goes for some datasets like OASIS-3 or ADNI: _\\\"If I publish manuscripts using data from ADNI, I agree to the following:\\n`On the by-line of the manuscript, after the named authors, I will include ADNI as an author\\nby using the phrase \\\"for the Alzheimer's Disease Neuroimaging Initiative*\\\" with the asterisk\\nreferring to the following statement and list of names\\\"` The original paper even violates this requirement: https://arxiv.org/pdf/2408.00640 \\n\\nHaving said this, we want to thank all the reviewers again for their time and effort.\\nCheers\"}" ] }
0JOhLEf2bX
Proteome-wide prediction of mode of inheritance and molecular mechanism underlying genetic diseases using structural interactomics
[ "Ali Saadat", "Jacques Fellay" ]
Genetic diseases can be classified according to their modes of inheritance and their underlying molecular mechanisms. Autosomal dominant disorders often result from DNA variants that cause loss-of-function, gain-of-function, or dominant-negative effects, while autosomal recessive diseases are primarily linked to loss-of-function variants. In this study, we introduce a graph-of-graphs approach that leverages protein-protein interaction networks and high-resolution protein structures to predict the mode of inheritance of diseases caused by variants in autosomal genes, and to classify dominant-associated proteins based on their functional effect. Our approach integrates graph neural networks, structural interactomics and topological network features to provide proteome-wide predictions, thus offering a scalable method for understanding genetic disease mechanisms.
[ "Mode of inheritance", "Functional effect", "Genetic diseases mechanism", "Graph neural networks", "Graph-of-graphs", "Structural interactomics" ]
https://openreview.net/pdf?id=0JOhLEf2bX
https://openreview.net/forum?id=0JOhLEf2bX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wi6YKdxQbs", "jeWway56oy", "NXHOqlfVgF", "DhyXYgpvD0" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731579947407, 1730568732139, 1729324458203, 1730566694335 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3556/Authors" ], [ "ICLR.cc/2025/Conference/Submission3556/Reviewer_F49n" ], [ "ICLR.cc/2025/Conference/Submission3556/Reviewer_8LXJ" ], [ "ICLR.cc/2025/Conference/Submission3556/Reviewer_kSXk" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to thank the reviewers for their feedbacks and comments.\"}", "{\"summary\": \"The authors present a methodology able to detect both mode of inheritance (MOI) of proteins encoded by autosomal genes and the functional effects of gene variants. The strategy relies on established architectures like GCN, GAT, and GIN, using protein-protein interaction (PPI) data for MOI prediction (node classification) and protein structures obtained by AlphaFold for function prediction (graph classification). The author compared their method with two established strategies, one for MOI prediction (LDA) and one for functional effect prediction (SVM). The results reported by the authors show better metrics for their methodology. 
To inspect the biological validity of their results, the authors performed an enrichment analysis and determined the most influential features for the predictions via XAI.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main strengths of the paper are the following:\\n\\n1) The paper is well-written and easy to follow.\\n\\n2) The problems addressed in the paper are relevant.\\n\\n3) It is extremely interesting to have a methodology able to address both MOI and functional effects prediction instead of needing to rely on two different strategies for the two tasks.\\n\\n4) The bioinformatics-related work and processing are accurate.\\n\\n5) The figures provided help convey the message of the authors more effectively.\", \"weaknesses\": \"The main weaknesses of the paper are the following:\\n\\n1) The authors did not provide any code. This hinders the reproducibility and further evaluation of their methods and results.\\n\\n2) From the methodological point of view, there seems to be not much novelty. The authors use established architectures \\\"out-of-the-box\\\" to tackle the proposed tasks.\\n\\n3) It seems that the authors did not perform any parameter tuning on their models. Additionally, no information on the hyperparameters used in the model is provided. The authors state they use dropout and weight decay, but no value for those hyperparameters is shown.\\n\\n4) By reading the paragraph \\\"Training and evaluation,\\\" it seems that the authors split the dataset into just two sets and not into training, validation, and test sets. They probably only used training and test sets if they did not perform hyperparameter tuning.\\n\\n5) Regarding the explainability phase, Integrated Gradients was used to obtain global feature importance attributions by averaging the attributions of correctly predicted samples. I am not sure this is the correct approach to obtain global feature attributions. 
Leaving out the wrongly predicted samples from the averaging process may produce biased results. I suggest using global feature attribution methods instead. One example can be SAGE (Covert, Ian, Scott M. Lundberg, and Su-In Lee. \\\"Understanding global feature contributions with additive importance measures.\\\" Advances in Neural Information Processing Systems 33 (2020): 17212-17223), among others.\\n\\n6) Comparing against just one methodology per task (LDA and SVM) seems to me not enough to evaluate the performance of the strategy.\\n\\n7) Given the tasks are multiclass classifications on unbalanced datasets, showing the results in terms of precision, recall, and F1 only without specifying the type of averaging strategy (micro, macro) used or without providing a confusion matrix conveys too little information to really understand the accuracy of the method (in particular class-wise).\\n\\n8) The enrichment analysis reports only the enriched terms, but there are no links to the literature that confirm or better describe the association between the enriched terms and the proteins.\\n\\nOverall, given the strong bioinformatics focus, I believe that after some revisions the paper can be accepted in a more specialized venue/journal, but given the limited methodological contribution and the flaws/imprecisions in model training and evaluation, I am afraid the work is not ready for publication in a high-impact machine-learning-focused conference at its current state.\", \"questions\": \"In order to improve the paper, the authors could perform hyperparameter tuning and give more information on the hyperparams used. They could compare against a higher number of methodologies and provide better ways to convey the results (confusion matrices may help). 
Moreover, a literature search to verify the enriched terms could be performed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper gives a framework to predict the mode of inheritance of diseases and classify dominant-associated proteins based on their functional effect. The biggest highlight is its use of a graph-of-graphs idea to combine the protein-protein interaction networks and high-resolution protein structure.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The graph of graphs idea to predict the mode of inheritance of diseases is novel.\\n\\nThe methods are described in great detail and with persuasive experiments.\", \"weaknesses\": \"My biggest concern is that although this work seems to be relevant for predicting mode of inheritance and classifying functional effects, its contribution to deep learning models in the application domain of biology is insufficient. It is well known that GCN, GIN, and GAT are three very classical GNN models.\\n\\nAnd, the graph-of-graphs idea is also similar to the idea of the paper [1]. So, as far as ICLR is concerned, I think this may not be a notable paper for the community. \\n\\nAlso, I would suggest that the authors modify the size of each figure to make the content and fonts in the figures look a little more harmonious.\\n\\n[1] Gao Z, Jiang C, Zhang J, et al. Hierarchical graph learning for protein\\u2013protein interaction[J]. Nature Communications, 2023, 14(1): 1093.\", \"questions\": \"Please see the Weaknesses part. 
Thank you!\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose an approach to predict the likelihood for a protein to result in a disease if a mutation occurs on one of the inherited copies using a graph neural networks method. They propose to use two scales to create a graph of graphs representation: at a protein level nodes are entire proteins and edges are interactions between proteins and at a residue level nodes are amino acids and edges are the type of bonds between these.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The integration of information at multiple scales is of interest.\", \"weaknesses\": \"The authors do not present a method capable to integrate information at various scales but rather work independently at each scale without exploiting any form of communication between scales.\", \"questions\": [\"the features for the nodes at each scale seem to be engineered and not learnable: is it true?\", \"couldn't one learn a graph encoding from the residue level and add it to the features at the protein interaction scale?\", \"all empirical results are reported without a notion of dispersion; is it possible to repeat the experiments to get a measure of variance to understand the significance of the results?\", \"when comparing multiple approaches could you use a critical diagram of differences (e.g. https://scikit-posthocs.readthedocs.io/en/latest/generated/scikit_posthocs.critical_difference_diagram.html)\", \"Page 8 lines 417: why are the results notable? what would the enrichment analysis of a random set of protein yield instead? how about a non-random baseline, e.g. a nearest neighbour predictor.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0IqriWHWYy
Watch Out!! Your Confidence Might be a Reason for Vulnerability
[ "Ayush Pandey", "Akshay Agarwal" ]
The tremendous success of deep neural networks (DNNs) in solving `any' complex computer vision task paves the way for their deployment in the physical world. However, concerns arise when natural adversarial corruptions perturb unconstrained images in the physical world. It is widely known that these corruptions are inherently present in the environment and can fool DNNs. While the literature aims to provide safety to DNNs against these natural corruptions, it has developed two forms of defenses: (i) detection of corrupted images and (ii) mitigation of corruptions. So far, very little work has been done to understand the reason behind the vulnerabilities of DNNs against such corruption. We assert that network confidence is an essential component and ask whether a higher confidence implies a better decision by the network. Moreover, we ask whether this confidence itself is a reason for their vulnerability against corruption. We extensively study the correlation between the confidence of a model and its robustness in handling corruption. Through extensive experimental evaluation using multiple datasets and models, we found a significant connection between the confidence and robustness of a network.
[ "Confidence", "Robustness", "Natural Adversaries", "Object Recognition" ]
Reject
https://openreview.net/pdf?id=0IqriWHWYy
https://openreview.net/forum?id=0IqriWHWYy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVQ1SYPerz", "zTnLvGSTtj", "zL4uCD1OFQ", "vyhiCFSI82", "vG2sssJmgs", "oW9OwFyXHZ", "nTxiYOZ92E", "dDEOlVTHV8", "aI6Nb7YYkT", "ZSYyygIsMO", "VsXHEWY0QQ", "T9tswtYyWc", "RSAj4TRjjb", "RDCMcL7yrH", "QyGlcQv0WD", "MGgd08L56g", "LJmZPobwqj", "IQeVgG1Vvy", "HOjYHAKw4A", "8YJHvLc8Pj", "6ruEHZBEMo", "5ABDHDZ2a3", "4X6gY0dtkz", "3wwMtKSsYG", "3THmVPzwxF", "2Kjg6Dcwfr" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730767859140, 1733207283225, 1733226044655, 1733197459816, 1733226367457, 1733124984874, 1732976558321, 1733221762494, 1733221927772, 1730613622979, 1732973781836, 1730602668456, 1732974373590, 1733034338874, 1732976043474, 1732982705557, 1733199171663, 1732976176846, 1734593356474, 1732976747667, 1730253841208, 1732995840809, 1733207116092, 1733222808701, 1737523761207, 1733241838991 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_iz8w" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_qqpm" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_gZH2" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6310/Reviewer_obon" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_obon" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Area_Chair_Jfvm" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_qqpm" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ], [ "ICLR.cc/2025/Conference/Submission6310/Reviewer_obon" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6310/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes using Stochastic Weight Averaging Gaussian (SWAG) as a method for calibrating neural networks, aiming to improve their performance and robustness against natural corruptions. The approach leverages SWAG's capacity to model uncertainty and enhance prediction reliability, asserting that better-calibrated confidence scores contribute to robustness in challenging real-world conditions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents a detailed investigation into the impact of calibration on model robustness, especially under naturally occurring corruptions. By systematically exploring the role of confidence in model predictions, the authors contribute to understanding the relationship between calibration and robustness, reinforcing SWAG\\u2019s potential in addressing natural corruption without introducing additional computational burden associated with adversarial training.\", \"weaknesses\": \"1. 
Limited Novelty: The paper largely relies on the established SWAG technique without introducing new calibration methods or adaptations specific to the architecture or the problem of natural corruption.\\n\\n2. Experimental Scope: The experimental evaluations are confined to small datasets (CIFAR-10 and CIFAR-100) and convolutional architectures like VGG and ResNet, lacking analysis on larger-scale datasets and modern architectures like Transformers.\", \"questions\": \"1. Applicability Across Architectures: The proposed method seems tailored primarily for convolutional neural networks (CNNs). A major gap lies in assessing how well SWAG might generalize to other architectures, such as Transformers, which have become prevalent in vision tasks. Expanding the discussion on generalizability or including Transformers in the experimental setup could enhance the study's relevance and adaptability to current deep learning trends.\\n\\n2. Novelty in Approach: While the study reinforces known concepts around calibration and robustness, these insights are not novel, particularly within the Bayesian deep learning community, where calibration\\u2019s role in improving robustness under adversarial scenarios is well-understood [refA, refB]. This limits the paper's contribution, as it primarily confirms existing knowledge rather than pushing the boundaries with a novel calibration approach. Introducing an innovative calibration technique, or a modified variant of SWAG tailored for robustness, would provide a more substantial contribution.\\n\\n3. Experimental Limitations: The experiments focus on CIFAR datasets and CNNs, which are both limited in size and scope. A broader evaluation involving larger datasets like ImageNet, and a wider range of architectures, including Transformer-based models, would offer a stronger validation of SWAG's effectiveness. 
This could also strengthen the paper\u2019s generalizability claims and its relevance for real-world deployment.\n\nReferences\n\n[refA] Wicker, Matthew, et al. \\\"Bayesian inference with certifiable adversarial robustness.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2021.\n\n[refB] Stutz, David, Matthias Hein, and Bernt Schiele. \\\"Confidence-calibrated adversarial training: Generalizing to unseen attacks.\\\" International Conference on Machine Learning. PMLR, 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"We want to thank the reviewer for raising the rating and giving us another chance to address the concern.\n\n**PGD parameters:** We are now working with different parameters of PGD; based on the trend observed on other perturbations, we are hopeful that PGD can be tackled by our model in the same way. We will post the findings as soon as we obtain them, certainly before the deadline.\n\nFurther, to enhance the novelty of our work, we provide the following response:\n\n---------------------------------------\n**Extension and Novelty:** We have already begun to expand and enhance the methodology to address the limitations of SWAG. One key limitation of SWAG, which we acknowledge, is its unimodal nature. By approximating the posterior distribution as a single Gaussian, SWAG inherently struggles to capture more complex uncertainty landscapes, particularly in cases where the true weight distribution may exhibit multimodality or other non-Gaussian characteristics. Recognizing this constraint, we are actively working on extending the framework to incorporate more flexible and expressive distributions.\n\nSpecifically, we are exploring the use of mixtures of Gaussians to model multimodal posterior distributions. 
This extension allows the framework to represent multiple modes in the uncertainty space, thereby providing a richer understanding of model uncertainty. Sampling from this mixture of Gaussians implicitly generates an ensemble of models, each corresponding to weights from different modes of the loss landscape, which will improve generalization. \n\n**Our preliminary results using the VGG model on the CIFAR-10 dataset showcase an improvement of at least 5\\\\% across corruptions.** As said, we are actively working in this direction to improve SWAG and are committed to adding these findings to the camera-ready paper.\n\nWe believe that these extensions will significantly improve the ability of the framework to address overconfidence and provide a more robust approach to uncertainty modeling.\"}", "{\"title\": \"Response\", \"comment\": \"Yes, PGD has a higher attack success rate than FGSM, which can be seen from the lower classification accuracy under the PGD attack compared to the FGSM attack.\n\nFor example, when the FGSM attack is applied, the network yields 27.17\\\\% accuracy, compared to 14.24\\\\% under the PGD attack. Here the networks are trained with SGD. SWAG is found to handle the iterative PGD attack better than SGD. **So far, the vulnerability of SWAG against any adversarial attack has not been explored to the best of our knowledge; hence, the general assumption of PGD yielding higher success might not hold.** We will expand this observation in the camera-ready paper.\n\n2. The implementation is correct, as we are using a benchmark library to implement the attack. We are further verifying the implementation, but we have already checked it several times and found no error.\n\n3. We are further experimenting with varying iterations and step sizes. 
Nevertheless, we believe our results show strong evidence regarding the effectiveness in handling corruption and adversarial perturbations.\n\nThanks\"}", "{\"comment\": \"I appreciate the author's effort during the rebuttal period. The response has addressed some of my concerns, e.g., the performance on larger dataset ImageNet-C and other technical details. I will raise my score.\n\nHowever, after reading the response, the technical contribution of this paper remains to be limited somehow. I agree that this paper has pointed out a novel overconfidence phenomenon of classification models, but it applies an existing method as mitigation. In addition, the theoretical analysis is more conceptual rather than strict. \n\nI hope the authors can further modify the manuscript as stated.\"}", "{\"title\": \"New Adversarial Results\", \"comment\": \"Table 3: Accuracy with different perturbation values under PGD attack using CIFAR-10 dataset and VGG network. Here, 50 iterations are used to perform the attack.\n\n| **Epsilon (Max Perturbation)** | **SWAG** | **SGD** |\n|--------------------------------|-----------|----------|\n| 1/255 | 57.67% | 42.21% |\n| 2/255 | 51.56% | 38.92% |\n| 3/255 | 47.24% | 34.78% |\n| 4/255 | 43.59% | 31.37% |\n| 5/255 | 41.65% | 27.16% |\n| 6/255 | 38.21% | 21.19% |\n| 7/255 | 34.12% | 15.24% |\n| 8/255 | 26.23% | 9.64% |\n\nAfter increasing the iterations, we still observed the resilience of SWAG compared to SGD in handling the PGD attack. We will keep experimenting and add all possible analyses in the camera-ready paper.\n\n**While we wish to perform all the experiments quickly, limited computational power is a concern; hence, we request the reviewer to consider the trend, which is clear across the different variations of attacks.**\n\n**We hope that we have successfully resolved the majority of the concerns. We look forward to acknowledgement of the interesting observations resulting from this first-of-its-kind paper.**\"}", "{\"title\": \"Awaiting Reviewer's Acknowledgement\", \"comment\": \"We want to again thank each reviewer for providing valuable feedback, which significantly improved our findings and can pave the way for developing robust systems that can be deployed in the real world.\n\nWe are actively looking forward to hearing back from the reviewers and meta-reviewers acknowledging our detailed response to each comment raised, and we would be happy to resolve any remaining comments.\n\nThanks\"}", "{\"title\": \"Response\", \"comment\": \"**Novelty**\n\nWe acknowledge that SWAG in its raw form is not our contribution; however, as pointed out by other reviewers, this paper for the first time highlights the key issue: the overconfidence of SGD-trained CNN models and their reduced robustness across various noise and corruption scenarios. It is to be noted that SGD is one of the most popular optimizers for training deep models, including large models, convolutional models, and transformer architectures. Therefore, understanding its role in making networks sensitive to corruption is itself a significant contribution. Further, there are other optimizers that are not explicitly aimed at corruption; can they help in mitigating the impact of corruption from the angle of network confidence in image classification? As said, this is the first work to explore overconfidence as a key factor underlying the vulnerability of deep neural networks (DNNs). The proposed research aims in that direction and aims to provide a foundational understanding of how overconfidence impacts model robustness, particularly in the presence of natural corruption. 
**This work lays the benchmark for future research aimed at developing defense algorithms to tackle natural corruptions by knowing the reason why their defense can fail or developing novel calibration techniques with a focus on natural corruptions.**\n\n*Our primary novelty lies in:*\n\n**Highlighting Overconfidence in CNNs:** We systematically analyze and benchmark how CNN architectures, when trained with SGD, exhibit overconfidence in their predictions under corrupted datasets. This issue significantly impacts their robustness and has not been thoroughly explored in the context of corrupted data.\n\n**Reliability Diagrams and Analysis:** We provided detailed reliability diagrams to visualize and quantify the overconfidence of models. Importantly, the generation of these diagrams and the insights derived from them are not tied to SWAG and can be applied independently.\n\n**Benchmark for Corruptions:** Our study serves as a benchmark for understanding and addressing model overconfidence in corrupted datasets, which can guide future research in robustness and calibration.\nOur primary concern and contribution lie in analyzing the behavior of CNNs under corruption and highlighting their limitations in terms of overconfidence and reliability.\n\n**Regarding Larger Datasets and Advanced Models**\n\n**Models:** We acknowledge the initial limitations in the scope of our evaluations. In response to your concerns: we acknowledge the importance of modern architectures, such as Transformers, in benchmarking robustness. While we are actively working to extend our analysis to include such architectures, our current focus on convolutional architectures like VGG and ResNet provides a solid foundation for understanding overconfidence in models. These architectures remain widely used and serve as a meaningful starting point for this research. 
Preliminary results on Transformer-based architectures show trends similar to those observed in CNNs (e.g., VGG, ResNet), highlighting the generality of our findings across modern architectures. This extension, while ongoing, provides additional support for our contributions.\n\n**Datasets:** We have expanded our analysis to include results on the ImageNet-C dataset, which serves as a more comprehensive and challenging benchmark for assessing model robustness under various corruptions. These experiments reinforce our hypothesis that SGD-trained models, regardless of their architecture, tend to exhibit significant overconfidence when exposed to noisy or corrupted inputs. For example, the clean ImageNet accuracy of ResNet-50 improves from 82% with SGD to 91% with SWAG. When tested under Brightness noise, the accuracy of the SGD-trained model drops drastically to 15%, whereas the SWAG-trained model achieves a considerably higher accuracy of 47%, demonstrating its robustness to such perturbations. These findings highlight the effectiveness of SWAG in mitigating overconfidence and improving robustness in the presence of input corruption.\"}", "{\"title\": \"Adversarial Attack Analysis\", \"comment\": \"Thanks again for your comments. As mentioned earlier, we asserted that the proposed approach can provide defense against adversarial attacks as well; accordingly, we have performed extensive experiments with different adversarial parameters. 
The results of two popular adversarial attacks under varying perturbation norms are reported below.\", \"table_1\": \"Accuracy with different perturbation values under PGD attack using CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **75.68**% | 45.21% |\n| 2/255 | 72.50% | 39.50% |\n| 3/255 | 68.74% | 34.89% |\n| 4/255 | 62.08% | 29.08% |\n| 5/255 | 57.56% | 24.37% |\n| 6/255 | 52.67% | 19.12% |\n| 7/255 | 46.87% | 17.54% |\n| 8/255 | **41.56**% | 14.24% |\", \"table_2\": \"Accuracy with different perturbation values under FGSM attack using CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **61.24**% | 46.78% |\n| 2/255 | 55.89% | 41.24% |\n| 3/255 | 51.24% | 37.89% |\n| 4/255 | 47.89% | 35.29% |\n| 5/255 | 44.36% | 32.56% |\n| 6/255 | 41.15% | 30.21% |\n| 7/255 | 38.19% | 29.43% |\n| 8/255 | **35.24**% | 27.17% |\n\n**From the results, it can be observed that the SWAG model is not only effective in handling common corruptions but also in handling adversarial perturbations, with a significantly higher margin than SGD.** We believe such universality and extensive analysis can help in building a **universal** defense architecture.\n\nWe hope all these new results address the concerns of the reviewer, and we look forward to an upgrade of the paper's rating.\"}", "{\"title\": \"Adversarial Attack Analysis and Novel Extensions\", \"comment\": \"Thanks again for your comments. As mentioned earlier, we asserted that the proposed approach can provide defense against adversarial attacks as well; accordingly, we have performed extensive experiments with different adversarial parameters. 
The results of two popular adversarial attacks under varying perturbation norms are reported below.\", \"table_1\": \"Accuracy with different perturbation values under PGD attack using CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **75.68**% | 45.21% |\n| 2/255 | 72.50% | 39.50% |\n| 3/255 | 68.74% | 34.89% |\n| 4/255 | 62.08% | 29.08% |\n| 5/255 | 57.56% | 24.37% |\n| 6/255 | 52.67% | 19.12% |\n| 7/255 | 46.87% | 17.54% |\n| 8/255 | **41.56**% | 14.24% |\", \"table_2\": \"Accuracy with different perturbation values under FGSM attack using CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **61.24**% | 46.78% |\n| 2/255 | 55.89% | 41.24% |\n| 3/255 | 51.24% | 37.89% |\n| 4/255 | 47.89% | 35.29% |\n| 5/255 | 44.36% | 32.56% |\n| 6/255 | 41.15% | 30.21% |\n| 7/255 | 38.19% | 29.43% |\n| 8/255 | **35.24**% | 27.17% |\n\n**From the results, it can be observed that the SWAG model is not only effective in handling common corruptions but also in handling adversarial perturbations, with a significantly higher margin than SGD.** We believe such universality and extensive analysis can help in building a **universal** defense architecture.\n\n---------------------------------------\n**Extension and Novelty:** We have already begun to expand and enhance the methodology to address the limitations of SWAG. One key limitation of SWAG, which we acknowledge, is its unimodal nature. By approximating the posterior distribution as a single Gaussian, SWAG inherently struggles to capture more complex uncertainty landscapes, particularly in cases where the true weight distribution may exhibit multimodality or other non-Gaussian characteristics. 
Recognizing this constraint, we are actively working on extending the framework to incorporate more flexible and expressive distributions.\\n\\nSpecifically, we are exploring the use of mixtures of Gaussians to model multimodal posterior distributions. This extension allows the framework to represent multiple modes in the uncertainty space, thereby providing a richer understanding of model uncertainty. Sampling from this mixture of Gaussians implicitly generates an ensemble of models, each corresponding to weights from different modes of the loss landscape, which will improve the generalization. \\n\\n**Our preliminary results using the VGG model on the CIFAR-10 dataset showcase the improvement of at least 5\\\\% across corruptions.** As said, we are actively working in this direction to improve SWAG and are committed to adding these findings to the camera-ready paper.\\n\\nWe believe that these extensions will significantly improve the ability of the framework to address overconfidence and provide a more robust approach to uncertainty modeling.\\n\\n**We hope all these new results address the concerns of the reviewers and look forward to the upgrade to the rating of the paper.**\"}", "{\"summary\": \"The paper explores the challenges DNNs face from natural adversarial corruptions, which can undermine their robustness. While past work has focused on detecting and mitigating these corruptions, this study examines whether a model\\u2019s confidence may contribute to its vulnerability.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. It explores the vulnerability of DNNs from the perspective of model overconfidence.\\n2. The article is well-structured and relatively clear in its presentation.\", \"weaknesses\": \"1. What exactly is the novelty of this paper? SWAG is not your contribution; merely using it to derive some results for analysis does not suffice.\\n2. 
The phenomenon of model overconfidence appears to be only a description in your paper. Do you have specific examples or experimental results to substantiate this claim?\n3. Is your method limited to CNN architectures? Given the prevalence of transformer-based models, a method solely applicable to CNNs may have limited relevance, and it appears you tested on a very small set of CNN models.\n4. Based solely on the text, I cannot appreciate the superiority of your method. Please provide comparative experiments with adversarial training methods, covering dimensions such as effectiveness and cost. Furthermore, does your method apply only to natural corruptions? How would it perform against adversarial samples?\n5. The experiments lack depth: (1) In terms of models, this paper tests only on VGG-16 and ResNet, which seems rather limited. Where are the tests on more advanced models? (2) In terms of datasets, you only used CIFAR-10 and CIFAR-100, yet experiments on ImageNet are also necessary.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Responses and Update in Manuscript\", \"comment\": \"First, we want to thank each reviewer for providing constructive comments and highlighting that this is the first work aiming to provide explainability in understanding the vulnerability of deep models against natural (common) corruption. It is observed from the literature that several efforts have been made to mitigate the impact of corruption or to detect images as clean or corrupted, but no research effort has been made to explain the reason for this sensitivity in the first place. 
We assert that such interpretability can help build a better and more robust model, rather than later developing an additional model (which incurs extra cost and is likely too heavy to deploy on computationally limited devices) to mitigate the impact of corruption or detect it, which would further require purification of the corruption.\n\nTo further strengthen our findings, we have performed experiments with large-scale ImageNet corruption datasets, adversarial noise, and analyses with different training optimizers.\n\n**Updates in Manuscript:** The modifications can be found on pages 8-9 under section 4.1.4 and in Figures 4 and 5.\n\n**Results on ImageNet-C:**\n\nTo address the need for evaluating the proposed method on a broader and more challenging benchmark, we have included experiments on the ImageNet-C dataset. These results align with our observations on CIFAR-10 and CIFAR-100, further demonstrating that SGD-trained models exhibit significant overconfidence when exposed to natural corruption. Additionally, SWAG-trained models consistently outperform SGD in terms of accuracy and robustness under these corruptions, reinforcing the generality of our findings across diverse datasets.\n\n**Analysis with Different Optimizers:**\n\nIn response to the suggestion to explore the impact of different optimizers, we extended our experiments to include Adam, alongside SGD. The results reveal that while the SGD and Adam optimizers show varying degrees of overconfidence under natural corruptions, the patterns of improvement achieved by SWAG remain consistent across these optimizers. 
This highlights the adaptability of SWAG in addressing overconfidence issues, regardless of the underlying optimizer.\n\n**Analyses with different adversarial perturbations are also provided in the responses to individual reviewers who asked for such observations.**\n\n**Results on Transformers:** Our preliminary findings reveal that transformer models are equally vulnerable to common corruption, and overconfidence induced by SGD training is a primary reason. To verify this, we performed experiments with SWAG and observed significant reductions in overconfidence and a boost in classification performance on each dataset, including ImageNet-C.\n\n**Comparison with Adversarial Training (AT):** AT is one of the strongest defenses against adversarial perturbations; however, its effectiveness against corruptions is not adequately studied. Nevertheless, based on the suggestion of the reviewers, we have performed a comparison of the proposed work with AT. The comparison can be performed from at least the following three perspectives: (i) computational cost, (ii) accuracy on clean images, and (iii) handling of corruptions. The proposed SWAG model is found to be computationally lighter than AT in terms of training time. For example, the PGD AT model on the CIFAR-10 dataset took approximately 250 minutes, whereas the computational time of SWAG is 170 minutes on a similar GPU machine. Further, as is well known, AT shows a significant reduction in clean accuracy. The proposed model not only maintains clean accuracy but even improves it compared to traditional SGD-trained models. Similar effectiveness and strength can be seen in handling corruption, where the accuracy of the proposed model is at least 15\\\\% better than that of the PGD AT model.\n\n**Since the upload of a revised PDF is no longer possible, we aim to add all these new findings to the camera-ready paper**. 
We are hopeful that we have addressed all the comments of the reviewers and hence look forward to an updated rating and *acceptance of such a critical work, which can pave the way toward an era of secure deep learning models*. We will be happy to address any remaining comments. Thanks\"}", "{\"summary\": \"This paper investigates the vulnerability of deep neural networks (DNNs) when facing natural corruptions (such as noise, blur, etc.) and proposes that the model's confidence could be an important factor contributing to this vulnerability. Experiments demonstrate a significant correlation between a model\u2019s confidence and its robustness in handling corruption. The study primarily focuses on calibrating model confidence and employs the Stochastic Weight Averaging Gaussian (SWAG) method to enhance model robustness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"Proposing that high confidence might lead to model vulnerability in naturally corrupted environments is a novel perspective, differing from traditional defense methods.\", \"weaknesses\": \"1. This paper only conducts experiments on convolutional neural networks (CNNs), lacking tests on other network architectures such as ViT-B/16 or DeiT, to validate the conclusions about confidence and robustness across different model types. This would provide a more comprehensive demonstration of the method's applicability and effectiveness.\n\n2. This paper validates the robustness of the model to natural corruptions and its relationship with confidence using CIFAR-10 and CIFAR-100 datasets. However, the complexity of these datasets is relatively low, making it difficult to fully reflect the model's performance in real-world complex scenarios. 
It is recommended to conduct further experiments on more challenging datasets such as ImageNet-C or ImageNet-A, which include a broader range of corruptions (e.g., Gaussian noise, motion blur, weather-induced degradation, digital transformations) and better reflect the diversity and complexity of real-world applications. \n\n3. The paper mainly focuses on confidence calibration without an in-depth comparison with other advanced defense methods (such as adversarial training), which may weaken the practical applicability of this approach.\", \"questions\": \"Although the proposed method addresses natural corruption, its effectiveness against gradient-based adversarial attacks remains unclear. It is recommended that the authors conduct experiments involving FGSM, PGD, and C&W attacks to evaluate the method's performance under adversarial attacks. For example, performance under different noise magnitudes and different numbers of attack iterations could be assessed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses\", \"comment\": \"**Novelty:** While the paper does not have a technical contribution per se, as pointed out by other reviewers (e.g., Reviewer obon03), this is the first work to explore overconfidence as a key factor underlying the vulnerability of deep neural networks (DNNs). We assert that since deep networks are fundamentally vulnerable to natural corruption, developing defenses *without finding out the reason is merely providing a false sense of security*. The proposed research aims in that direction and aims to provide a foundational understanding of how overconfidence impacts model robustness, particularly in the presence of natural corruption. 
**This work lays the benchmark for future research aimed at developing defense algorithms to tackle natural corruptions by knowing the reason why their defense can fail or developing novel calibration techniques with a focus on natural corruptions.**\n\nFurther, while our work builds upon SWAG, we have introduced novel elements and experiments aimed at exploring model confidence, particularly in the context of corrupted images. The original SWAG paper did not study its impact on natural corruption or how models trained with it behave under natural and adversarial corruption. Furthermore, unlike most existing literature, which primarily focuses on mitigating corruption or distinguishing corrupted images from clean ones, our analysis takes a different perspective. We investigate the underlying reasons for the reduced accuracy of CNN models when exposed to noise, framing this issue through the lens of overconfidence. This novel viewpoint provides fresh insights into the limitations of CNNs under noisy conditions and contributes to a deeper understanding of model robustness and calibration. \n\n**To further address comments, we want to highlight the following contributions:**\n\n**Optimizer Exploration:** Unlike the original SWAG implementation, we experimented with additional optimizers, such as Adam, to evaluate their effect on model performance and confidence under corruption. This extension offers insights into the adaptability of SWAG beyond its original formulation.\n\n**Novelty in Addressing Overconfidence:** To the best of our knowledge, this is the first work to directly investigate and quantify the issue of model overconfidence when exposed to corruption in the input data. 
Our findings not only highlight the severity of overconfidence in these scenarios but also serve as a benchmark for future research in mitigating this issue.\n\n**Benchmark for Corruption Noise:** By demonstrating the overconfidence phenomenon with corrupted inputs, our paper provides a baseline for evaluating models' susceptibility to noise-induced errors. We believe this contribution is essential for understanding model robustness and calibration.\n\n2. We have expanded our analysis to address this concern. Specifically, we conducted experiments on the ImageNet-C dataset, which features a diverse range of corruption types, to evaluate the robustness of our approach on a larger-scale dataset. These experiments further validate our findings on the overconfidence issue in models exposed to corrupted inputs.\n\nAdditionally, we acknowledge the importance of modern architectures, such as Transformers, in benchmarking robustness. While we are actively working to extend our analysis to include such architectures, our preliminary results on Transformer-based architectures show trends similar to those observed in CNNs (e.g., VGG, ResNet), highlighting the generality of our findings across modern architectures. This extension, while ongoing, provides additional support for our contributions.\"}", "{\"title\": \"Response (Part-2): Theoretical Impact\", \"comment\": \"1. It is observed that when the networks are trained using SWAG optimization, the loss optima become wider, which we assert helps the model achieve robustness to noise. The robustness can be observed from the better performance of the SWAG models on each common corruption, significantly improving over traditional SGD-trained models. 
Below, we outline the underlying principles contributing to this robustness:\\n\\n**Wide Minima in the Loss Landscape:** \\nSWAG optimizes the model by converging to wide, flat minima in the loss landscape (Averaging Weights Leads to Wider Optima and Better Generalization, UAI 2018), which are inherently more robust to perturbations. Wide minima ensure that small variations in the input (e.g., noise or corruptions) do not significantly affect the model\\u2019s predictions, as the model parameters are less sensitive to such changes.\\n\\n**Gaussian Weight Averaging:** \\nBy averaging weights sampled from a Gaussian posterior, SWAG (A Simple Baseline for Bayesian Uncertainty in Deep Learning, NeurIPS 2019) effectively captures a diverse range of plausible solutions. This diversity in the weight space enhances the model's generalization ability and reduces sensitivity to input corruptions, as predictions are averaged across multiple configurations. Additionally, as highlighted in the original SWAG paper, it demonstrates a strong ability to approximate the posterior distribution, enabling thorough exploration of the weight space. This capability contributes significantly to the robustness and adaptability of the model under various challenging scenarios.\\n\\n**Reduced Overconfidence:** SGD-trained models tend to converge to sharp minima, which are characterized by overconfidence and poor generalization to unseen or corrupted data. SWAG mitigates this issue by regularizing the solution space, resulting in more calibrated predictions that are less likely to be overly confident under corruption.\\n\\n**Theoretical Connection to Robustness:** The robustness of SWAG can be linked to the geometry of the loss surface: Wide minima are associated with low-curvature regions of the loss surface, which are less affected by noise or distributional shifts. 
Conversely, sharp minima found by SGD are high-curvature solutions, making them highly sensitive to even minor perturbations in the input.\\n\\n**Intrinsic Variability in Weight Distributions:** SWAG samples weights from a posterior distribution rather than relying on a single point estimate (as in SGD). This stochasticity introduces resilience by implicitly accounting for uncertainty in the model parameters, making it more robust to corrupted or noisy inputs.\\n\\n\\n2. **Predictions:** The predictions from the model on the test set are divided into bins based on the confidence score (maximum predicted probability for each prediction). In this case, the data is likely split into 20 bins. For each bin, calculate the average accuracy by checking the actual outcomes of the predictions in that bin. Calculate the average confidence by taking the mean of the confidence scores in that bin. For each bin, the difference between the average confidence and the actual accuracy is plotted on the y-axis, while the x-axis represents the confidence level. The above implementation is aligned to the work proposed in SWAG (A Simple Baseline for Bayesian Uncertainty in Deep Learning, NeurIPS 2019).\"}", "{\"title\": \"Response (Part-1)\", \"comment\": \"We acknowledge that while SWAG in its raw form is not our contribution; as pointed out by other reviewers, this paper for the first time highlights the key issue: the overconfidence of SGD-trained CNN models and their reduced robustness across various noise and corruption scenarios. It is to be noted here the fact that SGD is one of the most popular optimizers in training any deep models including large models, convolutional models, and transformer architectures. Therefore, understanding its role in making the networks sensitive to corruption itself is a significant contribution. 
Further, are there other optimizers that, while not explicitly aimed at corruption, can help in mitigating the impact of corruption from the angle of network confidence in image classification? As said, this is the first work to explore overconfidence as a key factor underlying the vulnerability of deep neural networks (DNNs). The proposed research moves in that direction and aims to provide a foundational understanding of how overconfidence impacts model robustness, particularly in the presence of natural corruption. **This work lays the benchmark for future research aimed at developing defense algorithms to tackle natural corruptions by knowing the reason why their defense can fail or developing novel calibration techniques with a focus on natural corruptions.**\n\n*Our primary novelty lies in:*\n\n**Highlighting Overconfidence in CNNs:** We systematically analyze and benchmark how CNN architectures, when trained with SGD, exhibit overconfidence in their predictions under corrupted datasets. This issue significantly impacts their robustness and has not been thoroughly explored in the context of corrupted data.\n\n**Reliability Diagrams and Analysis:** We provided detailed reliability diagrams to visualize and quantify the overconfidence of models. Importantly, the generation of these diagrams and the insights derived from them are not tied to SWAG and can be applied independently.\n\n**Benchmark for Corruptions:** Our study serves as a benchmark for understanding and addressing model overconfidence in corrupted datasets, which can guide future research in robustness and calibration.\nOur primary concern and contribution lie in analyzing the behavior of CNNs under corruption and highlighting their limitations in terms of overconfidence and reliability.\n\n2. The issue of model overconfidence is well-documented and supported by various studies. For example:\n\nA. 
Mitigating Neural Network Overconfidence with Logit Normalization, Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23631-23644, 2022.\n\nB. Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence, Advances in Neural Information Processing Systems 34 (NeurIPS 2021)\n\nC. Confidence-Aware Learning for Deep Neural Networks, ICML'20: Proceedings of the 37th International Conference on Machine Learning\n\nIt is to be noted here that while these studies address confidence in a network, they do not aim to understand whether such confidence or overconfidence is a primary factor in natural corruption sensitivity.\n\n3. Additionally, we acknowledge the importance of modern architectures, such as Transformers, in benchmarking robustness. While we are actively working to extend our analysis to include such architectures, our preliminary results on Transformer-based architectures show trends similar to those observed in CNNs (e.g., VGG, ResNet), highlighting the generality of our findings across modern architectures. This extension, while ongoing, provides additional support for our contributions.\n\n4. We acknowledge the importance of conducting comparative experiments with adversarial training methods, as they provide valuable insights into the balance between robustness, effectiveness, and computational cost. However, the primary focus of our study is to highlight the pervasive issue of model overconfidence, particularly in SGD-trained CNN models, when exposed to various types of natural corruption. This key limitation, observed consistently in such models, has been the central point of exploration in our analysis.\"}", "{\"title\": \"Transformers\", \"comment\": \"**Results on Transformers:** Our preliminary findings reveal that the transformer models are equally vulnerable to common corruptions, and overconfidence through SGD is a primary reason. 
To verify that, we have performed experiments with SWAG and observed significant reductions in overconfidence and a boost in classification performance on each dataset, including ImageNet-C.\"}", "{\"comment\": \"The maximum perturbation \\( \\epsilon = 0.003 \\) is used in PGD attacks, which is less than \\( 1/255 \\) (a single pixel's intensity level), and is indeed quite small. As such, it may not sufficiently represent realistic adversarial scenarios, and its conclusions could be limited in scope. To provide a more comprehensive evaluation, it would be beneficial to test robustness over a broader range of maximum perturbations, such as \\( 1/255 \\) to \\( 8/255 \\). This range aligns better with common adversarial settings and would allow for a deeper understanding of the model's performance under varying levels of perturbation intensity.\"}", "{\"title\": \"Response (Part-2): Adversarial Attacks and Beyond\", \"comment\": \"To further extend our investigation, we conducted additional experiments using PGD attacks on both CIFAR-10 and CIFAR-100 datasets. These experiments revealed consistent trends with our observations on natural corruptions: SGD-trained models exhibit significant overconfidence under adversarial perturbations as well.\n\n**Evaluation of Robustness under FGSM Attacks**\n\nUnder the FGSM (Fast Gradient Sign Method) attack, the SGD-trained model achieved an accuracy of 27%, while the SWAG-trained model demonstrated improved robustness with an accuracy of 35%. The hyperparameters for this attack were a maximum perturbation (\u03f5) of 0.03 with a single-step gradient. 
The results highlight that SWAG\u2019s convergence to wide minima reduces the model\u2019s sensitivity to small, gradient-based perturbations, which are commonly exploited by FGSM, thereby enhancing robustness compared to the SGD-trained model.\n\n**Evaluation of Robustness under C&W Attacks**\n\nFor the Carlini & Wagner (C&W) attack, which employs optimization-based perturbations, the SWAG-trained model achieved an accuracy of 78%, outperforming the SGD-trained model\u2019s accuracy of 71%. The attack was configured with a confidence parameter (c) of 10, a learning rate of 0.01, and 1,000 iterations. \n\n**Evaluation of Robustness under PGD Attacks**\n\nUnder the iterative PGD (Projected Gradient Descent) attack, the SWAG-trained model showed a significant improvement in robustness, achieving an accuracy of 75% compared to the SGD-trained model\u2019s accuracy of 45%. The hyperparameters used for PGD were a maximum perturbation (\u03f5) of 0.003 and a step size of 0.008. These results reinforce SWAG\u2019s advantage in achieving robustness, as its wide minima reduce the model\u2019s sensitivity to iterative and accumulated perturbations, unlike SGD, which converges to sharp minima.\n\n5. We acknowledge the initial limitations in the scope of our evaluations. In response to your concerns:\", \"models\": \"We acknowledge the importance of modern architectures, such as Transformers, in benchmarking robustness. While we are actively working to extend our analysis to include such architectures, our current focus on convolutional architectures like VGG and ResNet provides a solid foundation for understanding overconfidence in models. These architectures remain widely used and serve as a meaningful starting point for this research. Preliminary results on Transformer-based architectures show trends similar to those observed in CNNs (e.g., VGG, ResNet), highlighting the generality of our findings across modern architectures. 
This extension, while ongoing, provides additional support for our contributions.\n\n**Datasets:** We have expanded our analysis to include results on the ImageNet-C dataset, which serves as a more comprehensive and challenging benchmark for assessing model robustness under various corruptions. These experiments reinforce our hypothesis that SGD-trained models, regardless of their architecture, tend to exhibit significant overconfidence when exposed to noisy or corrupted inputs. For example, the clean ImageNet accuracy of ResNet-50 improves from 82% with SGD to 91% with SWAG. When tested under Brightness noise, the accuracy of the SGD-trained model drops drastically to 15%, whereas the SWAG-trained model achieves a considerably higher accuracy of 47%, demonstrating its robustness to such perturbations. These findings highlight the effectiveness of SWAG in mitigating overconfidence and improving robustness in the presence of input corruption.\"}", "{\"metareview\": \"This paper focused on studying the underlying reason behind the vulnerabilities of deep neural networks to adversarial corruptions. It found that model confidence could be a key factor. Based on the observation, the paper further proposed using Stochastic Weight Averaging Gaussian (SWAG) for DNN calibration. The experiments on multiple datasets prove the effectiveness of the method.\n\nDifferent from prior work that tried to detect corrupted images or mitigate corruptions, this paper tried to understand the vulnerabilities and developed a method based on the motivation. Drawing the connection between model confidence and corruption robustness is novel and provides new insights. The paper is well-written. However, there are a few weaknesses identified by the reviewers. First, the technical contribution is not enough, as the paper simply applies the Stochastic Weight Averaging Gaussian (SWAG) with some necessary modifications. 
Second, the initial paper lacks extensive experiments on larger datasets (e.g., ImageNet) and models (Transformers). The authors made an effort to provide more experiments on larger datasets, more attacks, and new architectures. There are still some concerns about the novelty and comprehensiveness. \n\nAfter thorough discussions, the reviewers reached a consensus that the paper needs further improvements to address the technical contribution. Therefore, the AC considers that the paper falls short of the ICLR acceptance threshold and recommends rejection.\", \"additional_comments_on_reviewer_discussion\": [\"The reviewers initially raised several concerns about the paper:\", \"Reviewer iz8w raised the concerns about limited novelty and experimental scope of the paper. The authors tried to clarify the novelty of the new observation on the connection between model confidence and corruption robustness. They also provided initial experiments on Transformer models.\", \"Reviewer gZH2 raised the concerns about technical novelty, limited architectures, and lack of experiments in some aspects. The authors tried to address them by providing a detailed description of the novelty, extending to new architectures, and providing new experiments.\", \"Reviewer obon raised the concerns about limited architectures, limited datasets, and lack of comparisons with advanced defenses. The authors provided more experiments on other architectures and datasets to address the concerns.\", \"Reviewer qqpm raised the concerns about limited technical contribution and lacking theoretical analysis. The authors tried to clarify the contribution and the reviewer improved the rating to 6.\", \"After author-reviewer discussion and AC-reviewer discussion, the reviewers and AC reached a consensus that the paper has limited technical contribution and some of the experiments are lacking to sufficiently demonstrate the effectiveness of the method. 
Therefore, AC would recommend rejection.\"]}", "{\"title\": \"Response\", \"comment\": \"*Our primary novelty lies in:*\n\n**Highlighting Overconfidence in CNNs:** We systematically analyze and benchmark how CNN architectures, when trained with SGD, exhibit overconfidence in their predictions under corrupted datasets. This issue significantly impacts their robustness and has not been thoroughly explored in the context of corrupted data.\n\n**Reliability Diagrams and Analysis:** We provided detailed reliability diagrams to visualize and quantify the overconfidence of models. Importantly, the generation of these diagrams and the insights derived from them are not tied to SWAG and can be applied independently.\n\n**Benchmark for Corruptions:** Our study serves as a benchmark for understanding and addressing model overconfidence in corrupted datasets, which can guide future research in robustness and calibration.\nOur primary concern and contribution lie in analyzing the behavior of CNNs under corruption and highlighting their limitations in terms of overconfidence and reliability.\n\n**Regarding Larger Datasets and Advanced Models**\n\n**Models:** We acknowledge the initial limitations in the scope of our evaluations. In response to your concerns, we acknowledge the importance of modern architectures, such as Transformers, in benchmarking robustness. While we are actively working to extend our analysis to include such architectures, our current focus on convolutional architectures like VGG and ResNet provides a solid foundation for understanding overconfidence in models. These architectures remain widely used and serve as a meaningful starting point for this research. Preliminary results on Transformer-based architectures show trends similar to those observed in CNNs (e.g., VGG, ResNet), highlighting the generality of our findings across modern architectures. 
This extension, while ongoing, provides additional support for our contributions.\n\n**Datasets:** We have expanded our analysis to include results on the ImageNet-C dataset, which serves as a more comprehensive and challenging benchmark for assessing model robustness under various corruptions. These experiments reinforce our hypothesis that SGD-trained models, regardless of their architecture, tend to exhibit significant overconfidence when exposed to noisy or corrupted inputs. For example, the clean ImageNet accuracy of ResNet-50 improves from 82% with SGD to 91% with SWAG. When tested under Brightness noise, the accuracy of the SGD-trained model drops drastically to 15%, whereas the SWAG-trained model achieves a considerably higher accuracy of 47%, demonstrating its robustness to such perturbations. These findings highlight the effectiveness of SWAG in mitigating overconfidence and improving robustness in the presence of input corruption.\n\n2. To further extend our investigation, we conducted additional experiments using PGD attacks on both CIFAR-10 and CIFAR-100 datasets. These experiments revealed consistent trends with our observations on natural corruptions: SGD-trained models exhibit significant overconfidence under adversarial perturbations as well.\n\n**Evaluation of Robustness under FGSM Attacks**\n\nUnder the FGSM (Fast Gradient Sign Method) attack, the SGD-trained model achieved an accuracy of 27%, while the SWAG-trained model demonstrated improved robustness with an accuracy of 35%. The hyperparameters for this attack were a maximum perturbation (\u03f5) of 0.03 with a single-step gradient. 
The results highlight that SWAG\u2019s convergence to wide minima reduces the model\u2019s sensitivity to small, gradient-based perturbations, which are commonly exploited by FGSM, thereby enhancing robustness compared to the SGD-trained model.\n\n**Evaluation of Robustness under C&W Attacks**\n\nFor the Carlini & Wagner (C&W) attack, which employs optimization-based perturbations, the SWAG-trained model achieved an accuracy of 78%, outperforming the SGD-trained model\u2019s accuracy of 71%. The attack was configured with a confidence parameter (c) of 10, a learning rate of 0.01, and 1,000 iterations. \n\n**Evaluation of Robustness under PGD Attacks**\n\nUnder the iterative PGD (Projected Gradient Descent) attack, the SWAG-trained model showed a significant improvement in robustness, achieving an accuracy of 75% compared to the SGD-trained model\u2019s accuracy of 45%. The hyperparameters used for PGD were a maximum perturbation (\u03f5) of 0.003 and a step size of 0.008. These results reinforce SWAG\u2019s advantage in achieving robustness, as its wide minima reduce the model\u2019s sensitivity to iterative and accumulated perturbations, unlike SGD, which converges to sharp minima.\"}", "{\"summary\": \"This paper investigates the correlation between the confidence of deep neural networks and their vulnerability to natural corruptions. Specifically, the authors leverage the model calibration method SWAG to construct a smoothed model. The parameter of this model is sampled and averaged from the estimated Gaussian distribution of several versions of model parameters recorded during training. 
The evaluation on the widely used natural corruption benchmark CIFAR-10-C for VGGNet and PreActResNet has shown the robustness of the smoothed model against natural corruptions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Novel insight**: this paper is the first to leverage model calibration method to mitigate natural corruptions.\", \"**Promising experiment results**: the leveraged SWAG method substantially improves the robustness of CNNs against natural corruptions.\", \"**Well-written paper**: the paper is well-organized and easy to follow.\"], \"weaknesses\": [\"**Limited technical contribution**: the methodology in Section 3 is originally proposed in SWA and SWAG. This paper has not introduced further adjustment or improvement when applying the method to mitigating natural corruptions.\", \"**Lacking theoretical analysis**: the experiments have shown the effectiveness of SWAG in improving the robustness against corruptions. However, no theoretical analysis is provided to help better understand the source of the robustness.\"], \"questions\": [\"For Figure 1, how is the reliability plot plotted? Specifically, which hyper-parameter is adjusted to control the confidence of the model, and how is it adjusted?\", \"How does the proposed method perform on larger datasets like ImageNet-C? Can this method survive different set of natural corruptions other than those in CIFAR-10-C, e.g., ImageNet-P, ImageNet-$\\\\bar{C}$ [1]?\", \"[1] On interaction between augmentations and corruptions in natural corruption robustness. 
NeurIPS 2021.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response: Adversarial Training\", \"comment\": \"**Comparison with Adversarial Training (AT):** AT is one of the strongest defenses against adversarial perturbations; however, its effectiveness against corruptions is not adequately studied. Nevertheless, based on the suggestion of the reviewers, we have performed a comparison of the proposed work with AT. The comparison can be performed on at least the following three perspectives: (i) computational cost, (ii) accuracy on clean images, and (iii) handling of corruptions. The proposed SWAG model is found to be computationally lighter as compared to AT in terms of training time. For example, the PGD AT model on the CIFAR-10 dataset took approximately 250 minutes, whereas the computational time of SWAG is 170 minutes on a similar GPU machine. Further, as is well known, AT shows a significant reduction in clean accuracy. The proposed model not only maintains clean accuracy but even improves it as compared to traditional SGD-trained models. Similar effectiveness and strength can be seen in handling corruption, where the accuracy of the proposed model is at least 15\% better than that of the PGD AT model.\"}", "{\"title\": \"Response\", \"comment\": \"We want to thank the reviewer for raising the rating and giving us another chance to address the concern.\n\n---------------------------------------\n**Extension and Novelty:** We have already begun to expand and enhance the methodology to address the limitation of SWAG. One key limitation of SWAG, which we acknowledge, is its unimodal nature. By approximating the posterior distribution as a single Gaussian, SWAG inherently struggles to capture more complex uncertainty landscapes, particularly in cases where the true weight distribution may exhibit multimodality or other non-Gaussian characteristics. 
Recognizing this constraint, we are actively working on extending the framework to incorporate more flexible and expressive distributions.\n\nSpecifically, we are exploring the use of mixtures of Gaussians to model multimodal posterior distributions. This extension allows the framework to represent multiple modes in the uncertainty space, thereby providing a richer understanding of model uncertainty. Sampling from this mixture of Gaussians implicitly generates an ensemble of models, each corresponding to weights from different modes of the loss landscape, which will improve the generalization. \n\n**Our preliminary results using the VGG model on the CIFAR-10 dataset showcase an improvement of at least 5\% across corruptions.** As said, we are actively working in this direction to improve SWAG and are committed to adding these findings to the camera-ready paper.\n\nWe believe that these extensions will significantly improve the ability of the framework to address overconfidence and provide a more robust approach to uncertainty modeling.\n\n\n-----------------------\n\nFurther, as mentioned earlier, we asserted that the proposed approach can provide a defense against adversarial attacks as well; accordingly, we have performed extensive experiments with different adversarial parameters. 
The results of two popular adversarial attacks under varying perturbation norms are reported below.\", \"table_1\": \"Accuracy with different perturbation values under PGD attack using the CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **75.68**% | 45.21% |\n| 2/255 | 72.50% | 39.50% |\n| 3/255 | 68.74% | 34.89% |\n| 4/255 | 62.08% | 29.08% |\n| 5/255 | 57.56% | 24.37% |\n| 6/255 | 52.67% | 19.12% |\n| 7/255 | 46.87% | 17.54% |\n| 8/255 | **41.56**% | 14.24% |\", \"table_2\": \"Accuracy with different perturbation values under FGSM attack using the CIFAR-10 dataset and VGG network.\n\n| Epsilon (Max Perturbation) | SWAG | SGD |\n|----------------------------|---------|--------|\n| 1/255 | **61.24**% | 46.78% |\n| 2/255 | 55.89% | 41.24% |\n| 3/255 | 51.24% | 37.89% |\n| 4/255 | 47.89% | 35.29% |\n| 5/255 | 44.36% | 32.56% |\n| 6/255 | 41.15% | 30.21% |\n| 7/255 | 38.19% | 29.43% |\n| 8/255 | **35.24**% | 27.17% |\n\n**From the results, it can be observed that the SWAG model is effective not only in handling common corruptions but also against adversarial perturbations, with a significantly higher margin than SGD.** We believe such universality and extensive analysis can help in building a **universal** defense architecture.\n\n**We hope all these new results address the concerns of the reviewers and look forward to an upgrade of the paper's rating.**\"}", "{\"comment\": \"PGD attacks typically have higher attack success rates than FGSM because PGD iteratively refines the perturbations, making it more effective at finding adversarial examples. The fact that PGD appears less successful than FGSM in these tables raises questions about the experimental setup.\n\n1. How many iterations and what step size were used for the PGD attacks? Could the number of iterations be too small, resulting in suboptimal adversarial examples?\n\n2. 
Is the PGD attack implementation correct?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"New Results Alert\", \"comment\": \"**PGD with epsilon = 0.03 and step size = 4/255**\", \"80_iterations\": \"Even with higher iterations, the accuracy of SWAG is 3 times better than that achieved with SGD.\", \"100_iterations\": \"The accuracy of SWAG is close to 7\%, whereas SGD yields 0.0\%.\n\n----------\nWith epsilon = 8/255, step size = 2/255, and iterations = 100, the SGD model yields 3.6\% and SWAG yields 19.3\%.\n\n**Further, given these new results, it is to be noted here that the concern (Reviewer obon) of PGD not performing better than FGSM is also resolved (PGD yields lower accuracy, even close to zero sometimes)**.\n\n**These new results also suggest the advantage of choosing SWAG over SGD and support the understanding that SGD inherently brings overconfidence to the networks, which leads to their vulnerability to corruption and adversarial perturbations.**\n\nThanks\"}" ] }
0IhoIn0jJ3
Inference of Sequential Patterns for Neural Message Passing in Temporal Graphs
[ "Jan von Pichowski", "Vincenzo Perri", "Lisi Qarkaxhija", "Ingo Scholtes" ]
The modelling of temporal patterns in dynamic graphs is an important current research issue in the development of time-aware Graph Neural Networks (GNNs). However, whether or not a specific sequence of events in a temporal graph constitutes a temporal pattern not only depends on the frequency of its occurrence. We must also consider whether it deviates from what is expected in a temporal graph where timestamps are randomly shuffled. While accounting for such a random baseline is important to model temporal patterns, it has mostly been ignored by current temporal graph neural networks. To address this issue we propose HYPA-DBGNN, a novel two-step approach that combines (i) the inference of anomalous sequential patterns in time series data on graphs based on a statistically principled null model, with (ii) a neural message passing approach that utilizes a higher-order De Bruijn graph whose edges capture overrepresented sequential patterns. Our method leverages hypergeometric graph ensembles to identify anomalous edges within both first- and higher-order De Bruijn graphs, which encode the temporal ordering of events. Consequently, the model introduces an inductive bias that enhances model interpretability. We evaluate our approach for static node classification using established benchmark datasets and a synthetic dataset that showcases its ability to incorporate the observed inductive bias regarding over- and under-represented temporal edges. Furthermore, we demonstrate the framework's effectiveness in detecting similar patterns within empirical datasets, resulting in superior performance compared to baseline methods in node classification tasks. To the best of our knowledge, our work is the first to introduce statistically informed GNNs that leverage temporal and causal sequence anomalies. HYPA-DBGNN represents a promising path for bridging the gap between statistical graph inference and neural graph representation learning, with potential applications to static GNNs.
[ "graph neural networks", "temporal patterns", "higher order network", "random graph ensembles" ]
Reject
https://openreview.net/pdf?id=0IhoIn0jJ3
https://openreview.net/forum?id=0IhoIn0jJ3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uDNpDUoliI", "u3xrl8trVh", "ih9DNMjyDx", "hw1nWbLPxL", "bWjDhwKDRK", "Zlc3AgSa0U", "Sh5O1Z1IU0", "RoGnj5jibg", "LkJuPkF2kW", "K80mIDRpug", "JHTRID9VpY", "J0ClgLVByU", "D4SlqqD1qx", "8ojtKOXZOL", "7hdW95qYyd" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732404198166, 1731108696611, 1730378976934, 1732584712421, 1732097501553, 1732737271000, 1732096668722, 1737523761437, 1732677495536, 1730516849493, 1732096952758, 1730685514947, 1732096805204, 1734406746065, 1732096516196 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_LQXN" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_jhfT" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_Y5VP" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_Y5VP" ], [ "ICLR.cc/2025/Conference/Submission6317/Authors" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_jhfT" ], [ "ICLR.cc/2025/Conference/Submission6317/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_UJFa" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_UJFa" ], [ "ICLR.cc/2025/Conference/Submission6317/Authors" ], [ "ICLR.cc/2025/Conference/Submission6317/Reviewer_LQXN" ], [ "ICLR.cc/2025/Conference/Submission6317/Authors" ], [ "ICLR.cc/2025/Conference/Submission6317/Area_Chair_oJmz" ], [ "ICLR.cc/2025/Conference/Submission6317/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I acknowledged your reply, and have briefly reviewed other response.\\nI understand the limitation of your experiments, and I can hold my points of adding new baselines and datasets (while I still believe it is necessary to improve paper quality).\\nHowever, I still need several points to be 
addressed to improve my score, listed from high to low priority:\n1. Based on your response, I understand the task is improving static node prediction on temporal graphs by a method with better (potential) explainability. However, I don't think the experiment is sound enough to claim this: As I said, **your experimental results on real datasets have giant confidence intervals, and mix with other baselines**; Besides, I am not persuaded by the conclusion that some paths can only be captured by your method, e.g., you should try comparing with explainable GNNs (like GNNExplainer) on that. **From my view, the best way is to give a counterexample, theoretically (not empirically) prove why some temporal paths can only be captured by your method, not any other baselines.**\n2. I think explainability should be a contribution of your method, thus adding explainability extraction to your method will improve the score, e.g., **how to extract key/anomalous paths from your deep model inference**.\n3. As pointed out also by other reviewers, scalability is also something to detail. For example, in the finance domain, the number of transactions can be giant, and building and computing over hypergraphs is even more costly, **how to achieve a feasible training and inference on giant temporal graphs (at least a million nodes) should be discussed**.\"}", "{\"summary\": \"This paper studies how to model temporal patterns in dynamic graphs and proposes to use statistical graph inference to identify sequence anomalies for graph augmentation and perform message passing on it to capture inductive biases of sequence patterns. 
The effectiveness of the model is tested on a synthetic dataset and five empirical datasets for static node classification.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea of augmenting the input graph for message passing using a statistical null model to detect abnormal temporal patterns and distinguish sequences beyond frequency is interesting.\", \"The adapted HYPA offers an interpretable way to identify unusual sequences in dynamic graphs, and the proposed HYPA-DBGNN achieves improved performance over baseline models on multiple empirical datasets.\"], \"weaknesses\": [\"The core techniques of using De Bruijn graphs and hypergeometric testing are well established in time series data analysis. The proposed HYPA-DBGNN is, to some extent, an interesting adaptation for GNNs.\", \"Using De Bruijn graphs with statistical augmentation is a sound approach. However, the paper would benefit from more discussion on why it is optimal for this purpose under the setting for node classification on time-varying graphs, rather than simply improving from DBGNN.\", \"The evaluation focuses on a limited set of small human interaction networks. Testing on a more diverse set of temporal datasets would better substantiate the model\\u2019s broader applicability and generalizability across domains.\"], \"questions\": [\"Q1 The authors state that computational complexity may not be a limiting factor. Could the authors further clarify the complexity increase from DBGNN? How would it compare to standard temporal GNNs? Meanwhile, all datasets used for evaluation have fewer than 500 nodes; can the proposed method scale to larger graphs?\", \"Q2 The results in Table 1 on synthetic data try to highlight patterns that only high-order models can discern.
However, the results are not convincing or interpretable, especially the discussion of the baseline HONEM (even a strong one in Table 2) is very limited.\", \"Q3 The proposed method claims to have better interpretability by introducing HYPA. Could the authors elaborate more on how it is made more expressive by not relying on the transitivity assumption?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces HYPA-DBGNN, a graph augmentation architecture focused on temporal graph learning. It encodes sequential pattern dynamics in first- and higher-order De Bruijn graphs and corrects graph structures using anomaly statistics. HYPA-DBGNN computes HYPA scores via hypergeometric ensembles to assess edge frequency differences from a random model, adjusting weights to improve accuracy. It uses a multi-order message passing scheme with inductive bias, incorporating HYPA scores and ReLU activation while preserving graph sparsity to optimize efficiency.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper introduces De Bruijn graphs into temporal graph analysis, which I find to be a novel approach.\\n\\n2. The paper conducts extensive experiments to demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. The paper's exposition is not very clear, with many key pieces of information relegated to the appendices.\\n\\n2. The paper does not clearly explain why the introduction of De Bruijn graphs enhances performance, making it seem more like a simple combination of existing methods.\\n\\n3. The explanation of the method is insufficiently clear; a framework diagram could be helpful.\", \"questions\": \"1.
Could the authors explain the role of De Bruijn graphs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you to the authors for their responses. I will maintain my current rating.\"}", "{\"comment\": \"We thank you for your helpful review and the positive comments about our work. We have addressed the main points in the aggregate response to all reviewers.\\n\\nTo answer the question about the role of the De Bruijn graphs we like to refer to the framework diagram in appendix C. For clarity, we consider moving this to the main part of the manuscript. The higher-order De Bruijn graph encodes the transitive dependencies found in the data set. We enhance that information based on the described null model to make the model able to express even more patterns. Different orders of the De Bruijn graph are used to encode first- and higher-order dependencies.\"}", "{\"comment\": \"Thank you for your response. While I appreciate the effort, I feel that some key concerns remain unaddressed.\\n\\nYour explanation on the computational complexity and scalability of HYPA-DBGNN compared to standard temporal GNNs was insufficient (actual runtime comparison and preprocessing cost remain unclear). Additionally, the interpretability of synthetic data results still lacks clarity and convincing explanation. I also believe that the discussion around expressivity and transitivity could benefit from more specific examples with HYPA (besides Fig. 4 in Appx. I).\\n\\nBased on my above reviews and authors' responses, I stand by my initial rating. I encourage further refinement in these areas to improve the manuscript.\"}", "{\"comment\": \"We thank you for your supportive review and the positive feedback about our work. 
We have addressed most of your questions in the aggregate response to all reviewers.\\n\\n.\", \"we_additionally_like_to_address_why_we_focus_on_the_given_data_sets\": \"The choice of data used in our evaluation was based on the need to have a sufficiently large number of observed interactions compared to the number of nodes and edges. This is necessary in order to observe a number of time-respecting paths that is sufficiently large to establish significant deviations from the expected values calculated from the model. Most available large data sets on temporal graphs have large numbers of nodes and edges, but are too sparse in terms of observed time-respecting paths. \\n\\n.\", \"regarding_the_performance_of_honem\": \"The synthetic data sets contain random fluctuations in the possibly higher-order edge frequency statistics due to the randomized creation and splitting. HONEM that heavily utilized higher-order edge frequencies seems to learn these fluctuations and not the pattern because the standard deviation is high and the performance does not increase for the data set with pattern.\\nAlso for the empirical data HONEM relies on the higher-order edge frequencies. As a result it performs better than first-order methods, especially in data sets like Hospital or Workplace where the higher-order statistics are linked to the classification task (see Fig. 2).\\n\\n.\", \"regarding_the_transitivity_assumption\": \"We indeed encode the transitivity with the edges of the De Bruijn graph. The edge weight is extended by the HYPA scores such that we are able to express the representativeness of the transitive dependencies. We explain the example for the increased expressivity in the general response.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for addressing my questions, but I didn't see any serious effort made to address any of the weaknesses that were pointed out. 
Because of this, I will be holding my score.\"}", "{\"summary\": \"This work introduces a model termed HYPA-DBGNN, which seeks to improve the ability of a GNN in temporal settings to learn high-order time dependent interactions. HYPA-DBGNN has two components, HYPA which detects the ``surprise'' of observing a specific walk, and DBGNN which performs a hypergeometric walk feature extraction. The authors detail this model as an extension of DBGNN, and present experiments which show promising performance gains.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well organized and motivated.\\n2. The problem of extracting complex relationships from transitions between vertices is an interesting problem with many industrial applications\\n3. The experiments that are presented appear to be carefully performed and well motivated. The results as presented provide evidence that the method works.\", \"weaknesses\": \"1. The paper is unclear in spots. For example, The concept of a De Bruijn graph is mentioned but its basic properties are not discussed.\\n2. The mathematical notation is intricate and can be difficult to follow, with some symbols overlapping with standard symbols from the literature. For example, $H(v)$ is the sum of $HYPA$ factors but is traditionally the hidden representation for all vertices.\\n3. The intuition for \\n4. Minor typos and grammatical issues make the paper somewhat difficult to follow. For example, `fist` -> `first` on line 314. \\n5. Experiments in section 5.2 seem to lack many modern baselines including CAWN, TGAT, DySAT, and others. I would recommend that the authors add additional baselines. Random walk GNNs such as RWGNN could be applicable here as well, as could transformer architectures.\\n6. The experimental setup is unclear in spots, the baselines may have been untuned, and the graphs are small.\", \"questions\": \"1. 
Does this new inductive bias lead to a provably more expressive GNN than previous temporal MPNNs?\\n2. What is the run-time scaling of HYPA-DBGNN? All experiments were run on quite small graphs, so it's hard to understand how scalable of a technique this is.\\n3. To what extent has hyperparameter tuning been performed?\\n4. What explains HONEM's good performance in 5.1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for your insightful review and the positive comments about our work. We have addressed most of your questions in the aggregate response to all reviewers.\\n\\n.\\n\\nRegarding the **hyper parameters**, we like to refer to section 5.1 for an explanation of the used hyper parameters. As stated, we also tune the hyper parameters for the baselines. The same setup is used for tuning and evaluating our model and the baselines. In response to reviewer LQXN, we will move this explanation to the appendix and add another table to make the hyper parameter ranges and search even more clear. \\n\\n.\", \"regarding_the_choice_of_data_sets_used_in_our_evaluation\": \"this was based on the need to have a sufficiently large number of observed interactions compared to the number of nodes and edges. This is necessary in order to observe a number of time-respecting paths that is sufficiently large to establish significant deviations from the expected values calculated from the model. Most available large data sets on temporal graphs have large numbers of nodes and edges, but are too sparse in terms of observed time-respecting paths.\\n\\n.\", \"regarding_the_performance_of_honem\": \"The synthetic data sets contain random fluctuations in the possibly higher-order edge frequency statistics due to the randomized creation and splitting. 
HONEM, which heavily utilizes higher-order edge frequencies, seems to learn these fluctuations and not the pattern, because the standard deviation is high and the performance does not increase for the data set with the pattern.\\nAlso for the empirical data HONEM relies on the higher-order edge frequencies. As a result it performs better than first-order methods, especially in data sets like Hospital or Workplace where the higher-order statistics are linked to the classification task (see Fig. 2).\"}", "{\"summary\": \"This work focuses on a relatively novel task, static node property classification for temporal graphs. Different from the common trend of temporal graph neural networks, it proposes HYPA-DBGNN, which extends a previous work DBGNN (which combines a static higher-order graph neural network on a high-order De Bruijn Graph constructed from time series) by null model correction.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"This work focuses on static node property classification on temporal graphs, which is a task that lacks exploration.\"], \"weaknesses\": [\"The notation lacks consistency, making it hard to follow, and the clarity of method details is quite poor. (See questions)\", \"This paper focuses on a rare task, for which I think more real-world justification is needed. For example, what are real-world scenarios? You can pick one of your datasets to explain this in more detail.\", \"The contribution of the proposal is slightly unclear. It seems that this work simply extends the related work DBGNN by introducing null model correction.
If my understanding is correct, I think more theoretical justification of the necessity of this correction should be provided; otherwise, the contribution seems to be limited.\", \"In both the synthetic and real-world experiments, the variances are very large, which makes me doubt whether the problem is formalized correctly.\", \"Compared to the highly related baseline DBGNN, the experimental results are not quite impressive (confidence intervals overlap a lot in many tasks). This makes the contribution of null model correction less sound given no theoretical justification of its necessity.\"], \"questions\": [\"Figure 1: Why do we construct a higher-order edge for count 0? Besides, shouldn't we have an arrow from (a) to (d), since we also need 1-order counts to construct the 1-order graph in (d)?\", \"Figure 1: You should extend the figure with how the null model in (b) and the weights in (c) are really generated, or provide this in the appendix. This figure fails to explain what you did for (b) and (c), given the poor explanation of section 4.\", \"line 262: What are $X_{uv}$ and $f(u, v)$? Why are they independent of order $k$?\", \"line 282: Shouldn't $H(v)$ rely on order $k$ based on your definition? The same applies to equation (1).\", \"Page 6: Mixed use of higher-order nodes and nodes makes the notation a bit hard to follow; I recommend replacing $v$ by $v^{(k)}$ in all related content, or using the vector form $\\\\mathbf{v}$. Then, you can claim that $k = 1$ is omitted by default.\", \"line 292: Why map $h^{1, 0}$ to $h^{k, 1}$ rather than $h^{k, 0}$?\", \"line 295: Can you provide more explanation of how this bipartition is analogous to a Markov chain?\", \"line 304: What is $g$?\", \"Why is this design limited to temporal node classification?
I think this architecture can be used for regression without any modification.\", \"line 331-351: Hyperparameter configuration can be moved to the appendix so that you can have more space to improve the clarity of the algorithm design sections.\", \"Experiment: You are comparing with a lot of simple baselines for static graphs, with only one temporal graph baseline. Based on [1], static and temporal graph representations are indeed equivalent, especially since you are performing static node classification on a temporal graph. Why don't you compare with other basic methods such as GAT, GIN, TGAT, DySAT (see [1]), and other state-of-the-art models like PNA, PINE, GraphTransformer?\", \"Given that TGN is designed mainly for evolving graphs, should you make some modification to make the comparison fair? For example, averaging node representations over different timestamps to perform static node classification on the temporal graph?\", \"Your font looks different from the template. I think you need to check if you are using the template correctly.\", \"[1] Gao, Jianfei, and Bruno Ribeiro. \\\"On the equivalence between temporal and static equivariant graph representations.\\\" International Conference on Machine Learning. PMLR, 2022.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for the detailed review and the check of our notation. In the following we will focus on the questions except for the notation. But we would like to highlight that those notation questions substantially improve our manuscript. The remaining questions are answered in the general response.\", \"regarding_your_questions\": \".\\n\\n**For example, what are real-world scenarios**? \\n\\nOur work exploits patterns in dependencies, e.g. in interactions of social networks. Those are relevant in different domains.\\n\\nIn the following we refer to the workplace and hospital data sets. Here, we predict the role of the employees.
With this prediction, we can make suggestions for new positions in the company based on the interaction patterns. This knowledge gained from interaction patterns can also be used to retrain employees to better fit their position. Both measures may increase the efficiency of the company.\\n\\nAnother not yet mentioned example is the detection of fraud in a financial context. Companies like [Robinhood Europe, U.A.B.](https://newsroom.aboutrobinhood.com/preventing-fraud-at-robinhood-using-graph-intelligence/?utm_source=blog.quastor.org&utm_medium=referral&utm_campaign=how-robinhood-uses-graph-algorithms-to-prevent-fraud) monitor trading movements and want to identify malicious members. Here a null model based approach can be essential to give important insights. Even though we are not aware of public benchmark data sets in this area we consider this as a very important use case that is not deeply explored, yet.\\n\\nWe thank you for this question and improve the motivation in the final manuscript. \\n\\n.\\n\\n**Can you provide more explanation how this bipartition is analogous to Markov chain**?\\n\\nThe higher-order graph models dependencies over multiple steps. The first bipartite layer maps the first-order dependency to the corresponding higher-order dependency that continues the chain. The final bipartite layer maps the higher-order dependency to the corresponding first-order dependency that again continues the chain. Hence, no information from the dependencies more than one step away are passed to the succeeding dependencies. \\n\\n.\\n\\n**Why this design is limited to temporal node classification? I think this architecture can be used for regression without any modification.**\\n\\nWe agree that there is no technical limitation to restrict or model to node classification. Static regression tasks for graphs with temporal patterns are an interesting avenue. However, to this point, we are not aware of suitable benchmark data sets. 
We are delighted to discuss further directions and references in this area.\\n\\n.\\n\\nAll in all, we like to thank you for the very detailed review and suggestions that result in a lasting improvement to our manuscript.\"}", "{\"metareview\": [\"## Summary\", \"The paper proposes HYPA-DBGNN, a method for static node classification on temporal graphs. The model uses De Bruijn graphs to encode sequential patterns and employs a null model correction via hypergeometric testing to identify and adjust for anomalous temporal patterns. Message passing is then performed on the augmented graph to capture inductive biases. Experiments on synthetic and real-world datasets demonstrate performance improvements over baselines.\", \"## Strengths\", \"Interesting Use of De Bruijn Graphs: The application of De Bruijn graphs for temporal graph augmentation is novel and provides a structured way to model sequential dynamics.\", \"The introduction of a null model (HYPA) for anomaly detection adds interpretability and effectively distinguishes temporal sequences based on frequency.\", \"The paper evaluates the method on both synthetic and empirical datasets, showcasing improvements in accuracy over prior approaches.\", \"## Weaknesses\", \"The method appears to be an incremental extension of DBGNN with null model correction. The need for this correction lacks strong theoretical justification.\", \"The paper suffers from unclear notations, insufficient explanation of key concepts (e.g., De Bruijn graphs), and missing details relegated to appendices, making it difficult to follow.\", \"Inadequate Baselines and Experiments: The evaluation excludes recent temporal GNN baselines (e.g., CAWN, TGAT, DySAT) and focuses on small datasets, limiting the scope and impact of the results. 
Additionally, performance gains over DBGNN are modest, with overlapping confidence intervals.\", \"While the paper introduces a novel application of De Bruijn graphs and a statistical null model for temporal graph augmentation, it lacks sufficient novelty, clarity in presentation, and comprehensive experimental validation. The method\\u2019s incremental nature, unclear theoretical motivation, and omission of strong baselines limit its overall contribution and impact.\"], \"additional_comments_on_reviewer_discussion\": [\"The common major concerns raised by the reviewers are:\", \"Clarity and Presentation Issues: The paper is difficult to follow due to inconsistent or unclear notations, lack of explanations for key concepts (e.g., De Bruijn graphs), and important details being placed in the appendix. The presentation suffers from typos, unclear mathematical exposition, and missing visual aids (e.g., a framework diagram).\", \"Limited Novelty and Theoretical Justification: The method appears to be an incremental extension of DBGNN, with the null model correction lacking strong theoretical justification. The rationale for using De Bruijn graphs and why they enhance performance in this specific setting is not adequately explained.\", \"Insufficient Experiments and Baselines: The experiments are conducted on small datasets and lack comparisons with modern temporal GNN baselines (e.g., CAWN, TGAT, DySAT). Reported performance improvements over DBGNN are modest, with overlapping confidence intervals, raising concerns about the method's impact.\", \"The authors tried to address the above points, but explanation on the computational complexity and scalability of HYPA-DBGNN compared to standard temporal GNNs was insufficient (actual runtime comparison and preprocessing cost remain unclear). 
Additionally, more experiments are required (e.g., experimental results on real datasets has giant confidence interval and scalability).\"]}", "{\"comment\": \"Thank you for the detailed reviews. We are grateful that the reviewers acknowledge the novelty and importance of our contribution. UJFa appreciates that our work considers \\u201can interesting problem with many industrial applications\\u201d. Y5VP highlights the \\u201cnovel approach\\u201d to introduce \\u201cDe Bruijn graphs into temporal graph analysis\\u201d and the \\u201cextensive experiments\\u201d conducted \\u201cto demonstrate the effectiveness of the proposed method\\u201d. Also, jhfT finds the \\u201cidea of augmenting the input graph [..] using a statistical null model [to] distinguish sequences beyond frequency [...] interesting\\u201d and emphasizes that our method \\u201coffers an interpretable way to identify unusual sequences in dynamic graphs\\u201d that leads to \\u201cimproved performance over baseline models on multiple empirical datasets\\u201d. We are also pleased that UJFa acknowledged that \\u201cthe paper is well organized and motivated\\u201d and that the experiments \\u201cappear to be carefully performed and well motivated\\u201d such that the results \\u201cprovide evidence that the method works\\u201d. We also thank the reviewers for the questions and suggestions which we address below.\\n\\n.\\n\\nEven though our method already shows \\u201cpromising performance gains\\u201d (UJFa) there is a deeper interest in the expressiveness of the GNN with inductive bias.\\n\\nWe demonstrate the enhanced expressivity with the synthetic data set. A certain class of patterns is encoded in one instance of the synthetic data set. The structure of this pattern is thoroughly explained in appendix I. As described, the second synthetic data set does not contain this pattern but the frequencies match the first one. 
In the evaluation, we show that no baseline method is able to improve the classification performance on the data set with pattern compared to the performance on the data set without patterns. Hence, they rely on the edge frequencies that are the same in both data sets. On the other hand, our method is able to learn a perfect classification for the data set with pattern. Hence, our method is able to express this pattern. \\n\\n.\\n\\nReviewer jhfT asked about the \\u201ccomplexity\\u201d and UJFa asked for the \\u201crun-time scaling\\u201d.\\n\\nIn appendix D, we included a theoretical analysis of the runtime of our method depending on the size of the temporal graph. This analysis actually shows that our model has reasonable runtime even for large data sets. These theoretical bounds are further corroborated by empirical evaluations in Ref. [45], which shows the scalability of the De Bruijn-based methods in larger data sets. They show, as also noted by reviewer Y5VP, that even though the De Bruijn Graph introduces new edges, it still retains sparsity in empirical data sets, making it not overly more dense than the graphs used by standard GNNs. The additional calculation of the HYPA scores requires a single traversal of the De Bruijn graph and it is done in a pre-processing step. The pre-processing takes only a negligible amount of time compared to the training of the model. To further remove this pre-processing, we provide an ablation study in appendices A and B that contains a simplified score that can be directly calculated dynamically during training.\\n\\n.\", \"we_comment_on_using_additional_baseline_models\": \"We agree that there is a range of methods for dynamic node property prediction in temporal graphs (TGAT, DySAT) and even a larger range for static node classification in static graphs (GAT, PINE, \\u2026). However, this work focuses on predicting static node properties, while using patterns in a temporal graph.
This limits the choice of suitable baselines that can be used without making major adaptations to the architecture.\\n\\nHowever, we include baselines from those other domains to represent those methods. We observe that node classification methods for static tasks miss the temporal information, leading to worse performance. Even though there might be more sophisticated methods in that area, they still miss out on the crucial information by design.\\n\\nThe methods for dynamic node prediction incorporate temporal information but focus on dynamically changing node classes and not on higher-order dependencies. Hence, major adaptations need to be made to compare those methods. This is not trivial and deserves its own research, which becomes visible in the question of LQXN about another implementation. The evaluation of different ideas, including the proposed one, with respect to the performance led to the presented version.\\n\\nTo conclude, we chose representative candidates from dynamic node prediction and static node classification approaches. What we show in our work is that because the task we propose is novel, and different from what these standard approaches were developed for, these models show weak performances. \\n\\n.\\n\\nWe address further individual reviewers' suggestions in a direct response.\\n\\nWe thank the reviewers for their time and their careful examination of the notation, which we revise in the camera-ready version. Thanks to the reviewers, we will substantially improve our manuscript.\"}" ] }
0HqPwbN1Su
MLGLP: Multi-Scale Line-Graph Link Prediction based on Graph Neural Networks
[ "Manizheh Ranjbar", "Mahdi Jalili", "Xiaodong Li", "Parham Moradi DW" ]
This manuscript proposes a multi-scale link prediction approach based on Graph Neural Networks (GNNs). The proposed method - Multi-Scale Line-Graph Link Prediction (MLGLP) - learns the graph structure and extracts effective representative features of graph edges to address challenges such as information loss and handle multi-scale information. This approach utilizes embedding vectors generated by GNNs from enclosing subgraphs. While expanding GNN layers can capture more intricate relations, it often leads to over-smoothing. To mitigate this issue, we propose constructing coarse-grained graphs at three distinct scales to uncover complex relations. To apply multi-scale subgraphs in GNNs without using pooling layers that lead to information loss, we convert each subgraph into a line-graph and reformulate the task as a node classification problem. The hierarchical structure facilitates exploration across various levels of abstraction, fostering deeper comprehension of the relationships and dependencies inherent within the graph. The proposed method is applied to the link prediction problem, which can be modelled as a graph classification problem. We perform extensive experiments on several well-known benchmarks and compare the results with state-of-the-art link prediction methods. The experimental results demonstrate the superiority of our proposed model in terms of average precision and area under the curve.
[ "link prediction", "graph neural network", "multi-scale graph", "line graph", "complex network." ]
Reject
https://openreview.net/pdf?id=0HqPwbN1Su
https://openreview.net/forum?id=0HqPwbN1Su
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xQl0OyDuKK", "rBIxl1E8e3", "qZPa4ET34x", "o2WJPh8I9X", "mZvEuwT41m", "kAZadLjH7i", "fd5t1UYgP5", "fZZuVfWKvO", "PJdti7b8Bx", "JKJYDRjwe5", "HkxwXkrbsv", "DYHAxXWpuP", "AnR1KFDqW3", "97dJ6qsEd8", "0nkXxklfvu", "0c1yasTzld" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1733208653112, 1733209434366, 1732786817250, 1733207380002, 1732848527894, 1730625315943, 1730104050764, 1732785638085, 1730448850317, 1733209606573, 1734705875241, 1732863342394, 1733218686228, 1737524000999, 1732840334474, 1732790510227 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_LUFZ" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_hDSk" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_hDSk" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_LUFZ" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_SocH" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Submission9712/Area_Chair_ZTfK" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_LUFZ" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9712/Reviewer_SocH" ], [ "ICLR.cc/2025/Conference/Submission9712/Authors" ] ], "structured_content_str": [ "{\"comment\": [\"We are pleased that our response has addressed your concerns, and we deeply appreciate the time and effort you have dedicated to reviewing our work.\", \"We have conducted additional experiments 
comparing NCNC[1] with our method, and the results demonstrate the superiority of our method compared to NCNC. The results are as follows:\", \"$\\\\textbf{Comparison Results}$\", \"$\\\\textbf{Cora dataset}$\", \"$NCNC$\", \"$Node Feature$: AUC = 95.72\\\\%, AP = 95.89\\\\%\", \"Random Node Feature: $\\\\textbf{AUC = 70.78 \\\\\\\\% }$, $\\\\textbf{AP = 75.16 \\\\\\\\% }$\", \"Onehot-degree-node - Node Feature: AUC = 85.06\\\\%, AP = 88.34\\\\%\", \"$MLGLP$: $\\\\textbf{AUC = 95.79\\\\\\\\%}$, $\\\\textbf{AP = 96.23\\\\\\\\%}$\", \"$\\\\textbf{NSC dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 59.76 \\\\%, AP = 56.87\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 95.39\\\\\\\\%}$, $\\\\textbf{AP = 96.95\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.68\\\\\\\\%}$, $\\\\textbf{AP = 99.89\\\\\\\\%}$\", \"$\\\\textbf{USAir dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 56.30\\\\%, AP = 53.57\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.88\\\\\\\\%}$, $\\\\textbf{AP = 96.24\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 98.31\\\\\\\\%}$, $\\\\textbf{AP = 98.28\\\\\\\\%}$\", \"$\\\\textbf{Router dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 72.26\\\\%, AP = 68.78\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.07\\\\\\\\%}$, $\\\\textbf{AP = 96.26\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.11\\\\\\\\%}$, $\\\\textbf{AP = 99.20\\\\\\\\%}$\", \"$\\\\textbf{Advantages of MLGLP}$\", \"$\\\\textbf{Independence from Node Features:}$ MLGLP does not require node attributes, making it highly effective in featureless settings or when node features are limited\", \"$\\\\textbf{Inference time Efficiency: }$ By focusing on localized subgraphs, MLGLP avoids the high latency of full-graph message passing, resulting in faster inference.\", \"$\\\\textbf{Task-Specific Design:}$ MLGLP captures pairwise structural relationships through h-hop enclosing subgraphs, making it well-suited for link prediction tasks, 
unlike NCNC, which may miss such dependencies.\", \"We hope these new experiments provide additional insight and further validate our approach, leading to a positive reevaluation of our work.\"], \"reference\": \"[1] Wang X, Yang H, Zhang M. Neural common neighbor with completion for link prediction. arXiv preprint arXiv:2302.00890. 2023 Feb 2.\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"Thank you for your reply. I still have four concerns:\\n1. In the case of datasets with existing features, such as Cora, please explain why you still use Random Node Feature and Onehot-degree-node - Node Feature instead of just Node Feature. \\n2. Why don't you add Citeseer's results to the comparison in Section 5.2, but only use Citeseer in the ablation experiment? \\n3. In the experiment in Section 5.2, what feature creation method did you use for GNN for datasets without existing features? This is not explained in the paper.\\n4. In the NCNC paper, most of the datasets they used are different from the datasets you used in this paper. How do you judge that PEG and BUDDY are still inferior to NCNC in different scenarios?\"}", "{\"comment\": \"We truly value your insightful feedback, which has played a crucial role in enhancing and refining our work. Below, we offer detailed responses to address the concerns you raised in the Weaknesses and Questions section.\\n\\n$\\\\textbf{(W1-Q1):}$ The performance disparity between MLGLP and LGLP in the early epochs, as observed in both training loss and AUC, can be explained by several potential factors:\\n\\n$\\\\textbf{Complexity and Feature Length:}$ The three-scale architecture in MLGLP increases the feature size (three times that of LGLP), leading to a higher-dimensional input space. 
While this enhances expressiveness, it can cause slower optimization in the initial epochs due to the larger number of parameters to adjust and the need for the model to learn to utilize the multi-scale features effectively.\\n\\n$\\\\textbf{Regularization and Generalization:}$ The additional multi-scale structure introduces a form of implicit regularization, which might slow early training but results in better generalization as training progresses. This delayed payoff is a trade-off, as demonstrated by the superior performance of MLGLP in later epochs and final metrics.\\n\\nI would argue that while LGLP performs marginally better in early epochs, MLGLP's multi-scale design offers several compelling advantages:\\n\\n$\\\\textbf{Higher Expressiveness:}$ Multi-scale representations allow MLGLP to capture both global and local graph structures simultaneously, uncovering complex relationships that LGLP's single-scale approach cannot handle. This explains why MLGLP consistently outperforms LGLP in terms of final AUC and training loss.\\n\\n$\\\\textbf{Broader Applicability:}$ Multi-scale approaches are better suited for diverse datasets with varying structural characteristics, making MLGLP more robust and generalizable across different graph types.\\n\\n$\\\\textbf{Experimental Results:}$ Despite slower initial improvement, MLGLP achieves better final performance across all metrics (AUC, training loss). This highlights its superiority in learning richer and more accurate graph representations.\\nWhile LGLP plateaus early, MLGLP continues to improve steadily throughout the training process. This suggests that MLGLP learns more meaningful representations over time, justifying its increased complexity.\\n\\n\\n\\n$\\\\textbf{(W2-Q2)}:$ Figure 5 presents the t-SNE visualizations for our proposed method, showcasing the results and demonstrating that the features learned by our model can be easily classified. 
However, we acknowledge that the figure currently lacks comparisons with state-of-the-art (SoTA) methods. To address this, we have added additional visualizations in the appendix, which compare MLGLP\\u2019s clustering results with those of SoTA methods such as LGLP and SEAL. These additional comparisons provide a more comprehensive assessment of MLGLP\\u2019s performance and ensure a fair and direct comparison with existing methods.\\n\\nThank you once again for your valuable suggestion. We will incorporate these updates in the revised paper to more effectively demonstrate the efficacy of MLGLP in relation to current approaches.\\n\\n\\n$\\\\textbf{(W3-Q3)}:$ Thank you for your feedback. In response to your comment, we have made the necessary modifications. Specifically, we have addressed the presentation issues, including the dangling \\\"However\\\" above Table 3 in Section 6. We have corrected this and ensured that the text flows more smoothly and clearly.\"}", "{\"comment\": [\"We have conducted additional experiments comparing NCNC[1] with our method (MLGLP), and the results demonstrate the superiority of our method compared to NCNC. We found that NCNC is highly sensitive to node features, which affects its performance in scenarios with limited or absent node attributes. 
The results are as follows:\", \"$\\\\textbf{Comparison Results}$\", \"$\\\\textbf{Cora}$\", \"$NCNC$\", \"$Node Feature$: AUC = 95.72\\\\%, AP = 95.89\\\\%\", \"Random Node Feature: $\\\\textbf{AUC = 70.78 \\\\\\\\% }$, $\\\\textbf{AP = 75.16 \\\\\\\\% }$\", \"Onehot-degree-node - Node Feature: AUC = 85.06\\\\%, AP = 88.34\\\\%\", \"$MLGLP$: $\\\\textbf{AUC = 95.79\\\\\\\\%}$, $\\\\textbf{AP = 96.23\\\\\\\\%}$\", \"$\\\\textbf{NSC}$\", \"$NCNC$\", \"Random Node Feature: AUC = 59.76 \\\\%, AP = 56.87\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 95.39\\\\\\\\%}$, $\\\\textbf{AP = 96.95\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.68\\\\\\\\%}$, $\\\\textbf{AP = 99.89\\\\\\\\%}$\", \"$\\\\textbf{USAir}$\", \"$NCNC$\", \"Random Node Feature: AUC = 56.30\\\\%, AP = 53.57\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.88\\\\\\\\%}$, $\\\\textbf{AP = 96.24\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 98.31\\\\\\\\%}$, $\\\\textbf{AP = 98.28\\\\\\\\%}$\", \"$\\\\textbf{Router}$\", \"$NCNC$\", \"Random Node Feature: AUC = 72.26\\\\%, AP = 68.78\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.07\\\\\\\\%}$, $\\\\textbf{AP = 96.26\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.11\\\\\\\\%}$, $\\\\textbf{AP = 99.20\\\\\\\\%}$\", \"$\\\\textbf{Advantages of MLGLP}$\", \"$\\\\textbf{Independence from Node Features}$\", \"$\\\\textbf{Inference time Efficiency}$\", \"$\\\\textbf{Task-Specific Design}$\", \"As mentioned in the NCNC paper[1], NCNC outperforms PEG and BUDDY. Therefore, I have compared our method to NCNC. We will include detailed comparisons in the camera-ready version to further illustrate these distinctions and validate our method against NCNC, PEG, and BUDDY.\", \"We hope this clarification addresses your concerns and would greatly appreciate it if you could reevaluate our contribution in light of these new results.\", \"Thank you for your time and consideration.\"], \"reference\": \"[1] Wang X, Yang H, Zhang M. 
Neural common neighbor with completion for link prediction. arXiv preprint arXiv:2302.00890. 2023 Feb 2.\"}", "{\"comment\": \"Thanks to the author's reply, most of my problems were solved and I decided to raise my score to 5.\"}", "{\"summary\": \"This manuscript presents Multi-Scale Line-Graph Link Prediction (MLGLP), a multi-scale link prediction method using Graph Neural Networks (GNNs). MLGLP learns graph structures and extracts edge features to address information loss and capture complex relationships. By constructing coarse-grained graphs at three scales and converting subgraphs into line graphs, it reformulates the task as node classification. Extensive experiments on benchmark datasets demonstrate that MLGLP outperforms state-of-the-art link prediction methods in average precision and area under the curve.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method demonstrates excellent performance, significantly improving results across various datasets.\\n2. The approach appears to be straightforward to follow.\", \"weaknesses\": \"1. There are some notation issues; the model name is inconsistently defined throughout the paper, sometimes referred to as MLGLP, other times as MSGL or MSLGLP (as noted in the caption of Table 3), and occasionally as MSLG (in section 5.1). Additionally, the tables display varying levels of decimal precision (sometimes three decimal places, sometimes two), which should be standardized.\\n2. There are concerns regarding baseline comparisons. For instance, the AP of GAE on the Cora dataset should be significantly higher than that of GCN based on the original paper, yet the authors report it being lower by ten points in their experiments, which needs to be explained to maintain credibility.\\n3. The method involves sampling subgraphs, converting them to line graphs, and then performing node classification, which appears to result in high time complexity. 
Although the authors analyze time complexity, the discussion is not in-depth. They should compare it with the time complexity of two other subgraph-based methods and also include training time comparisons.\\n4. The baselines compared in the paper seem somewhat outdated; for example, reference [1] proposes a line graph-based method for link prediction.\\n5. The core innovation of this paper appears to be the application of multi-scale and line graph concepts to link prediction tasks. However, the paper lacks ablation studies on these two components, such as whether the line graph contributes to performance improvement, and it does not compare the final concatenation method. This makes it difficult to ascertain the key factors driving the model's improved performance.\\n[1]Zhang Z, Sun S, Ma G, et al. Line graph contrastive learning for link prediction[J]. Pattern Recognition, 2023, 140: 109537.\", \"questions\": \"1. What explanation can the authors provide for the discrepancy in AP values between GAE and GCN on the Cora dataset?\\n2. Could the authors offer a more detailed analysis of the time complexity of their method compared to other subgraph-based approaches?\\n3. Can the authors conduct ablation studies to assess the individual contributions of the multi-scale and line graph components to the overall performance of the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed a link prediction method named Multi-Scale line-graph Link Prediction (MLGLP). MLGLP used three scales to capture information at a different level of granularity. The link prediction problem is defined as a node classification problem on a line graph, which facilitates a deeper understanding of relationships within the graph. 
Experiments conducted on several benchmark datasets validated the effectiveness of MLGLP.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors use multi-scale subgraphs for link prediction to capture graph information at different granularities. The approach is interesting.\\n2. The authors transform the link prediction problem into a node classification problem on a line graph, which better addresses the issue of link representation.\\n3. The proposed method outperforms existing methods in the experiments.\", \"weaknesses\": \"1. This paper appears to be an unfinished draft, containing many textual errors (e.g., the beginning of line 349 lacks capitalization, and line 372 is missing a period) and missing sentences (e.g., line 510).\\n2. The baselines chosen in this paper are outdated.\\n3. The paper lacks novelty, as the proposed module is superficial and easy to conceive. The proposed method for converting to line graph is very similar to LGLP.\", \"questions\": \"1. There are many papers on link prediction in line graphs; can you explain what distinguishes MLGLP from them?\\n2. It's necessary to use the latest methods as baselines, such as BUDDY[1], NCNC[2], and PEG[3].\\n3. I have doubts about the visualization results; in Figure 5, the blue and red points seem to overlap, indicating that MLGLP cannot distinguish between positive and negative samples. An explanation is needed.\", \"reference\": \"[1] Chamberlain BP, Shirobokov S, Rossi E, Frasca F, Markovich T, Hammerla N, Bronstein MM, Hansmire M. Graph neural networks for link prediction with subgraph sketching. arXiv preprint arXiv:2209.15486. 2022 Sep 30.\\n[2] Wang X, Yang H, Zhang M. Neural common neighbor with completion for link prediction. arXiv preprint arXiv:2302.00890. 2023 Feb 2.\\n[3] Wang H, Yin H, Zhang M, Li P. Equivariant and stable positional encoding for more powerful graph neural networks. arXiv preprint arXiv:2203.00199. 
2022 Mar 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your thoughtful feedback, which has been invaluable in refining and improving our work. Below, we provide detailed responses to address the concerns you raised in Weaknesses (W) and Questions (Q).\\n\\n$\\\\textbf{(W1)}$: All the mentioned issues are now addressed and corrected in the revised version\\n\\n\\n$\\\\textbf{(W2)}$: We appreciate the reviewer highlighting the discrepancy in the Average Precision (AP) of GAE on the Cora dataset compared to the results reported in the original paper.\\n\\nAfter revisiting our experiments and methodology, we would like to clarify the following points to address this concern:\\nThe observed discrepancy may stem from differences in implementation details, hyperparameter tuning, or data preprocessing. While we endeavored to align closely with the original GAE paper, minor variations in hyperparameters, preprocessing steps, or evaluation splits could have contributed to the difference in AP.\\nTo address this issue transparently and maintain credibility, we will Include additional details about our experimental setup in the final version of the paper.\\n\\n$\\\\textbf{(W3)}$: We appreciate your feedback and will enhance the time complexity analysis, as well as include training time comparisons in the revised paper.\\n\\n$\\\\textbf{(W4)}$: Thank you for your feedback. AA is a foundational baseline for link prediction, and while simple, it provides valuable comparison. We also include modern subgraph-based methods for a comprehensive evaluation.\\n\\n$\\\\textbf{(W5)}$: Thank you for pointing out the lack of ablation studies in our paper. We indeed have conducted an ablation study to compare the effect of multi-scale and line graph components $(Table 3)$. 
In our study, $Scale-1$ corresponds to only using a line graph component, and SEAL is used when only the first scale is applied, without the line graph transformation. We will revise the paper to include this clarification and correct the notation to reflect these details more clearly.\n\nWe appreciate your suggestion and will ensure that the revised version explicitly describes the ablation study and compares the final concatenation method to help readers better understand the contributions of each component.\n\n$\\textbf{(Q1)}$: As noted in the $\\textbf {PEG[1]}$ paper, the performance of GAE is highly dependent on the input features used for training. For example, Table 1 of PEG demonstrates that the performance of VGAE (a variant of GAE) on the Cora dataset can vary significantly depending on the input features, with AP $\\textbf{ scores ranging from 55.68 to 89.89}$. This underscores the sensitivity of GAE-based methods to feature selection, which may lead to performance variations across different experimental setups.\n\n$\\textbf{(Q2)}$: As mentioned in $\\textbf {[2]}$, GNNs face significant inference latency due to graph dependencies, scaling with $O(RL)$, where $R$ is the graph's average degree and $L$ is the number of layers. \n\nNode-based methods like GAE and GCN have higher inference times compared to subgraph-based methods, which focus on extracted subgraphs, reducing latency.", "when_compared_to_subgraph_based_methods": "- SEAL operates directly on subgraphs without line graph conversion, resulting in lower time complexity than our method.\n- LGLP includes subgraph extraction and line graph conversion, sharing a similar time complexity to our approach, though ours is slightly higher due to additional processing.\n\nHowever, our method achieves higher accuracy by mitigating information loss inherent in SEAL and LGLP during subgraph extraction or line graph conversion. 
Thus, while SEAL and LGLP are faster, our method balances slightly higher complexity with improved accuracy.\\n\\n$\\\\textbf{(Q3)}$: Thank you for the suggestion. We have conducted ablation studies to evaluate the individual contributions of the multi-scale and line graph components to the overall performance. \\nThe results are summarized in Table 3, where 'Scale-1' corresponds to the line graph component, and the other scales represent different structural levels within the multi-scale framework.\\n\\nFrom the results, it is evident that each scale contributes uniquely to the model's performance. For instance, 'Scale-1' (line graph) demonstrates strong results on datasets like NSC and Router, achieving high AP and AUC scores. However, combining all scales ('All' method) consistently yields the best performance across most datasets, such as USAir and Celegans, confirming the effectiveness of the multi-scale approach. This highlights the importance of capturing diverse structural patterns for robust graph representation.\", \"reference\": \"[1] Wang H, Yin H, Zhang M, Li P. Equivariant and stable positional encoding for more powerful graph neural networks. arXiv preprint arXiv:2203.00199. 2022.\\n\\n[2] Zhang, S., Liu, Y., Sun, Y., & Shah, N. Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation. In Proceedings of the International Conference on Learning Representations, 2022.\"}", "{\"summary\": \"The paper tackles oversmoothing in Graph Neural Networks by proposing the use of coarse-grained graphs at three scales to capture complex relationships. Instead of pooling layers, the authors convert subgraphs into line-graphs and reformulate the task as node classification, enhancing the exploration of relationships. 
Applied to link prediction as a graph classification problem, the method shows superior performance over existing methods in terms of average precision and area under the curve in extensive benchmark tests.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper introduces Multi-Scale Line-graph Link Prediction (MLGLP), a GNN approach that learns graph structures and features from edges, tackling information loss and multi-scale challenges.\", \"The method constructs coarse-grained graphs at three scales to uncover complex data relationships and converts them into line graph representations, allowing for node embedding learning and reformulating link prediction as node classification.\", \"Experimental results show significant performance improvements over heuristics, embeddings, and various GNN link prediction methods.\"], \"weaknesses\": [\"The comparison of training loss and AUC among LGLP, SEAL, and MLGLP demonstrates improved loss for MLGLP relative to the baselines, yet it remains unclear why MLGLP performs weaker than LGLP in the early epochs. Further clarification on this aspect would enhance the analysis.\", \"Figure 5 provides valuable visual insights; however, it lacks comparisons with state-of-the-art (SoTA) methods, hindering a fair assessment of MLGLP's performance. The authors should clarify whether the identified clusters correspond to meaningful patterns and provide an experimental analysis to support this.\", \"There are several presentation issues that require careful proofreading. For instance, Section 6 contains a dangling \\\"However\\\" above Table 3 that should be addressed.\"], \"questions\": \"See the weakness feedback\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We sincerely appreciate the time you\\u2019ve taken to review our work. 
We are glad that the additional information and clarifications addressed the concerns raised.\", \"We have conducted additional experiments comparing NCNC[1] with our method, and the results demonstrate the superiority of our method compared to NCNC. The results are as follows:\", \"$\\\\textbf{Comparison Results}$\", \"$\\\\textbf{Cora dataset}$\", \"$NCNC$\", \"$Node Feature$: AUC = 95.72\\\\%, AP = 95.89\\\\%\", \"Random Node Feature: $\\\\textbf{AUC = 70.78 \\\\\\\\% }$, $\\\\textbf{AP = 75.16 \\\\\\\\% }$\", \"Onehot-degree-node - Node Feature: AUC = 85.06\\\\%, AP = 88.34\\\\%\", \"$MLGLP$: $\\\\textbf{AUC = 95.79\\\\\\\\%}$, $\\\\textbf{AP = 96.23\\\\\\\\%}$\", \"$\\\\textbf{NSC dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 59.76 \\\\%, AP = 56.87\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 95.39\\\\\\\\%}$, $\\\\textbf{AP = 96.95\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.68\\\\\\\\%}$, $\\\\textbf{AP = 99.89\\\\\\\\%}$\", \"$\\\\textbf{USAir dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 56.30\\\\%, AP = 53.57\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.88\\\\\\\\%}$, $\\\\textbf{AP = 96.24\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 98.31\\\\\\\\%}$, $\\\\textbf{AP = 98.28\\\\\\\\%}$\", \"$\\\\textbf{Router dataset}$\", \"$NCNC$\", \"Random Node Feature: AUC = 72.26\\\\%, AP = 68.78\\\\%\", \"Onehot-degree-node - Node Feature: $\\\\textbf{AUC = 96.07\\\\\\\\%}$, $\\\\textbf{AP = 96.26\\\\\\\\%}$\", \"$MLGLP$: $\\\\textbf{AUC = 99.11\\\\\\\\%}$, $\\\\textbf{AP = 99.20\\\\\\\\%}$\", \"$\\\\textbf{Advantages of MLGLP}$\", \"$\\\\textbf{Independence from Node Features:}$ MLGLP does not require node attributes, making it highly effective in featureless settings or when node features are limited\", \"$\\\\textbf{Inference time Efficiency: }$ By focusing on localized subgraphs, MLGLP avoids the high latency of full-graph message passing, resulting in faster inference.\", \"$\\\\textbf{Task-Specific 
Design:}$ MLGLP captures pairwise structural relationships through h-hop enclosing subgraphs, making it well-suited for link prediction tasks, unlike NCNC, which may miss such dependencies.\", \"We hope that these results will contribute to the reevaluation of our work. Thank you again for your time and consideration.\"], \"reference\": \"[1] Wang X, Yang H, Zhang M. Neural common neighbor with completion for link prediction. arXiv preprint arXiv:2302.00890. 2023 Feb 2.\"}", "{\"metareview\": \"This paper proposes to incorporate multi-scale graph representations and transform link prediction into a node classification problem on line graphs. The core innovation, utilizing multi-scale subgraphs and line graphs for link prediction, lacks sufficient novelty. The approach is conceptually similar to existing methods such as LGLP, with incremental improvements rather than groundbreaking advancements. Besides, the choice of baselines is outdated, and the justification for omitting recent methods such as PEG, BUDDY, and NCNC is insufficient. While comparisons to NCNC were later added, they rely on indirect evaluations rather than comprehensive experimental results across shared datasets. Important datasets like Citeseer are inconsistently reported, with their results included in ablation studies but not in main comparisons. Additionally, the feature creation methods for GNNs on datasets without node features are not clearly explained, leaving critical gaps in the experimental setup. While the paper explores a promising direction and demonstrates some empirical improvements, it fails to meet the bar for originality, rigor, and clarity expected at ICLR. 
Strengthening the method\\u2019s theoretical contributions, providing thorough comparisons with up-to-date baselines, and significantly improving the presentation will enhance the paper\\u2019s impact in future submissions.\", \"additional_comments_on_reviewer_discussion\": \"While the authors addressed several review concerns, some responses were incomplete or lacked sufficient evidence. For example: The claim of MLGLP's superior performance in featureless settings was not robustly supported with experiments on newer baselines.\\nThe explanation of overlapping clusters in t-SNE visualizations remains unconvincing, casting doubt on the model's ability to differentiate positive and negative samples effectively.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your reply. Since you still haven't solved my doubt (Q2), I will keep my score.\"}", "{\"comment\": \"Thank you for your thoughtful questions and for giving us the opportunity to clarify these points. Below are our responses to your concerns:\\n\\n$\\\\textbf{1}$: To evaluate robustness and generalization, we tested on the Cora dataset using not only the original Node Features but also Random Node Features and Onehot-degree-node Features. 
For comparison, $\\\\textbf{ NCNC using Node Features achieved an AUC of 95.72\\\\\\\\% and an AP of 95.89\\\\\\\\%}$.\", \"these_additional_experiments_aim_to\": \"- Assess sensitivity to the absence of meaningful node features.\\n- Baseline Comparisons: Random Node Features establish a baseline, allowing us to determine how much of the performance depends on the network's structural properties rather than the node attributes.\\n- Evaluating Structural Information: Onehot-degree-node Features provide a simple structural representation, enabling us to study how well the methods leverage graph topology independently of node attributes.\\n \\n \\n \\n$\\\\textbf{2}$: We did perform a comparison on the Citeseer dataset and included its results in the ablation experiment to highlight specific aspects of our method. However, due to limited space in the table, we could not include Citeseer\\u2019s results in the main comparison.$\\\\textbf{To address your comment, we will add the Citeseer results to all tables in the next revision}$.\\n\\n$\\\\textbf{3}$: $\\\\textbf{To address this comment, we will add the settings in the next revision}$.\\nThe methods LGLP and SEAL rely on subgraph structures rather than node features. However, for GCN and GAT, we use random features to evaluate their performance in the absence of meaningful node attributes.\\n\\n$\\\\textbf{4}$: \\nWe completely agree with you; to perform a fair comparison, we need to include BUDDY, PEG, and NCNC in our experiments. Unfortunately, we do not have enough time to compare PEG, BUDDY, and NCNC across all datasets at this time. Our judgment was based on the report in the NCNC paper, which indicates that NCNC outperforms BUDDY and PEG.\\n\\n$\\\\textbf{To address your comment, we will conduct experiments to compare MLGLP with PEG and BUDDY in the next revision.}$\\n\\nIn all datasets used by NCNC, PEG, and BUDDY, node features are present. 
These methods rely on message-passing mechanisms, which makes them highly sensitive to the availability and quality of node features. As a result, their performance improves significantly when node features are available. However, in real-world applications, node features are not always accessible, limiting the applicability of these methods.\\n\\nFurthermore, as highlighted in $\\\\textbf{Table 1}$ of the $\\\\textbf{PEG}$ paper, PEG evaluates its sensitivity to different types of node features. For instance, on the $\\\\textbf{Cora}$ dataset, PEG achieves a best $\\\\textbf{AUC of 90.78 without node features}$, compared to $\\\\textbf{94.20 with node features}$. In contrast, our $\\\\textbf{MLGLP}$ achieves an $\\\\textbf{AUC of 95.79}$, demonstrating superior performance even in the absence of node features\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you to the authors for providing additional information and clarification in the rebuttal. The improvements made in response to the concerns raised have clearly enhanced the readability of the manuscript and addressed several key issues, making the paper easier to follow. As a result, I have revised the relevant scores to reflect these positive changes. However, considering the novelty of the work and the relatively limited improvement in overall performance, I believe it is appropriate to maintain the original overall rating.\"}", "{\"comment\": \"We are truly grateful for your insightful feedback, which has played a crucial role in enhancing and refining our work. In the following, we provide comprehensive responses to address the concerns and questions you raised under Weaknesses (W) and Questions (Q).\\n\\n$\\\\textbf{(W1):}$ Thank you for your helpful feedback. To address this comment, we have carefully reviewed the paper and have corrected all the textual errors.\\n\\n$\\\\textbf{(W2):}$ Thank you for your feedback on the baselines. 
In the revised version, we will update the baselines to include more recent and relevant methods to better reflect the current state of the field.\\n\\n$\\\\textbf{(W3):}$ Thank you for your feedback. While the concept of converting to a line graph may appear similar to LGLP, our method, unlike LGLP, which focuses on a single scale, incorporates a multi-scale framework. This enables it to capture richer and more diverse structural information across various scales of the graph. Additionally, as highlighted in the paper, our approach outperforms existing methods like LGLP in terms of accuracy.\\n\\n$\\\\textbf{(Q1):}$ Thank you for highlighting the numerous works on link prediction in line graphs. While existing methods have contributed significantly to the field, our proposed MLGLP (Multi-Scale Line Graph Link Prediction) framework offers key distinctions:\\n\\nIt is a $\\\\textbf{Multi-Scale Framework}$, Unlike traditional methods that focus on a single level of representation, MLGLP incorporates multiple scales to capture both local and global structural patterns in the line graph. This enables a more comprehensive understanding of the graph's topology, resulting in improved link prediction performance across diverse datasets.\\n\\nMLGLP leverages rich structural patterns from different scales to maintain robust performance. As shown in our ablation studies, each scale contributes uniquely to the model's robustness, outperforming single-scale methods in capturing nuanced link dependencies.\\n\\n$\\\\textbf{(Q2):}$ Thank you for the suggestion regarding the inclusion of recent baselines like BUDDY, NCNC, and PEG.\\nThank you for your valuable feedback. In the revised version of the paper, we will certainly include a comparison of MLGLP alongside other methods such as PEG, BUDDY, and NCNC. 
We are confident that these additions will help validate the efficacy of MLGLP, and we will ensure these clarifications are incorporated in the revision.\nIt is important to highlight that, as mentioned in Table 1 of the paper $\\textbf{[1]}$, for $\\textbf{Cora}$, the best $\\textbf{AUC}$ for $\\textbf{PEG}$ without using node features is $\\textbf{90.78 \u00b1 0.09}$, while for $\\textbf{MLGLP}$, the AUC is $\\textbf{95.79}$, as highlighted in $\\textbf{Table 7}$ of the appendix. These results demonstrate the significant improvement of MLGLP over existing methods, including PEG, in terms of performance. \n\nWe chose to compare MLGLP with LGLP as a baseline for the following reasons, despite other methods being noteworthy in the field.\n\n1. LGLP is relevant to our method as it was a leading approach in GNN-based link prediction when our work began. Both LGLP and MLGLP focus on direct edge representations using localized subgraphs, making them conceptually similar. In contrast, methods like PEG, Buddy, and NCNC rely on node embeddings for edge prediction, making them less directly comparable to MLGLP.\n\n2. Subgraph-based methods like LGLP, MLGLP, and SEAL have the advantage of localized computation, making them more efficient during inference compared to whole-graph approaches like PEG [2]. Including methods with similar computational paradigms ensures a fair evaluation.\n\n3. LGLP, MLGLP, and SEAL explicitly generate edge-specific embeddings, which are better suited for link prediction tasks. In contrast, PEG, Buddy, and NCNC produce node embeddings and infer edge relationships indirectly, which might limit their performance for certain edge-centric tasks.\n\n$\\textbf{(Q3)}$: Thank you for your insightful question. The overlap in the t-SNE visualizations (Figure 5) does not indicate poor performance in link prediction. 
t-SNE, a dimensionality reduction technique, may not always preserve local structures, leading to some overlap even when the model performs well. To address this, we added t-SNE results for LGLP and SEAL in the appendix. Further evidence of MLGLP's effectiveness is shown in the line graph link prediction task, where our method outperforms the baselines in classification accuracy and AUC, despite the overlap in the t-SNE plot. This demonstrates that MLGLP effectively distinguishes between positive and negative links in a higher-dimensional space.\", \"references\": \"[1] Wang H, Yin H, Zhang M, Li P. Equivariant and stable positional encoding for more powerful graph neural networks. arXiv preprint arXiv:2203.00199. 2022.\\n[2] Zhang, S., Liu, Y., Sun, Y., & Shah, N. Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation. In Proceedings of the ICLR, 2022.\"}" ] }
0HWAbWgI3T
A Geometric Approach to Personalized Recommendation with Set-Theoretic Constraints Using Box Embeddings
[ "Shib Sankar Dasgupta", "Michael Boratko", "Andrew McCallum" ]
Personalized item recommendation typically suffers from data sparsity, which is most often addressed by learning vector representations of users and items via low-rank matrix factorization. While this effectively densifies the matrix by assuming users and movies can be represented by linearly dependent latent features, it does not capture more complicated interactions. For example, vector representations struggle with set-theoretic relationships, such as negation and intersection, e.g. recommending a movie that is “comedy and action, but not romance”. In this work, we formulate the problem of personalized item recommendation as matrix completion where rows are set-theoretically dependent. To capture this set-theoretic dependence we represent each user and attribute by a hyperrectangle or box (i.e. a Cartesian product of intervals). Box embeddings can intuitively be understood as trainable Venn diagrams, and thus not only inherently represent similarity (via the Jaccard index), but also naturally and faithfully support arbitrary set-theoretic relationships. Queries involving set-theoretic constraints can be efficiently computed directly on the embedding space by performing geometric operations on the representations. We empirically demonstrate the superiority of box embeddings over vector-based neural methods on both simple and complex item recommendation queries by up to 30% overall.
[ "Box Embeddings", "Personalized Query", "Set-based embeddings", "Recommendation" ]
https://openreview.net/pdf?id=0HWAbWgI3T
https://openreview.net/forum?id=0HWAbWgI3T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zp0vvcOh6l", "x3NgAhFpTI", "tiA2EkBJV4", "t3K5gTj9je", "qtRoKJY22g", "pJiiJniB7E", "jaqzfR3vxt", "gsR8xRKOeB", "fUrOvDYvZs", "ajFTwyG00y", "WbcdszFQGx", "UWDpw2vYbQ", "UGd25yGdM9", "TWJyBSvvtM", "LsBi2gRS3b", "LiOivRwx2T", "J9a2vW4Jld", "BYZes6FwQs", "B9bIeplibL", "AraFiAyjqp", "7f6cIoabNt", "6gqGjih9nO" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1731967878909, 1733032522231, 1737981122345, 1731953968642, 1731950454361, 1731958261911, 1731956364162, 1731963029677, 1731961966265, 1731971570697, 1730528432349, 1731968934163, 1731950285791, 1731966158539, 1733032706634, 1730521163960, 1732504103047, 1732512648840, 1733165228696, 1731971676702, 1732430989202, 1730375528598 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Reviewer_Vvma" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3281/Reviewer_6WMm" ], [ "ICLR.cc/2025/Conference/Submission3281/Reviewer_Vvma" ], [ "ICLR.cc/2025/Conference/Submission3281/Reviewer_6WMm" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Authors" ], [ "ICLR.cc/2025/Conference/Submission3281/Reviewer_TC3c" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by the authors\", \"comment\": \"Thank you for your constructive feedback! While we address your specific questions and concerns in the following rebuttal, we encourage you to review our general response, which highlights common themes regarding the scope of our study and ambiguity in notations.\\n\\n---\\n\\n> The scope of this study is on a rather limited application, which the authors called \\\"attribute-specific query recommendation\\\".\\n\\nWhile it is true that our study focuses on \\\"attribute-specific query recommendation,\\\" this is a deliberate choice, as it addresses an unexplored and important problem within the broader landscape of recommendation systems. The ability to handle queries involving attributes and their logical combinations is critical for real-world applications, ranging from personalized e-commerce searches like \\u201cred winter coats under $100\\u201d to specific media preferences such as \\u201cdark comedies.\\u201d These scenarios highlight the need for systems that can handle both attribute-specific constraints and individual user preferences\\u2014an area where existing systems are limited. we see our focused contribution as the first step toward addressing an underexplored area with substantial real-world implications. \\n\\nWe also request the reviewer to refer to the \\u201cScope of our Work\\u201d section in the \\u201cGeneral Response.\\u201d\\n\\n---\\n\\n> The current manuscript lacks related work on \\\"attribute-specific query recommendation\\\". 
\\n> (C1) To my understanding, \\\"attribute-specific query recommendation\\\" is the task where (1) item attribute values are partially observed and often missing, and (2) in the prediction phase, the positive items are conditioned not only on a user but also on a boolean query. The problem of (1) has been addressed in existing studies (e.g., [b,c]). I am not familiar with (2) in the context of item recommendation, but there might be existing research on it. A discussion/comparison of existing studies on these points would make it easier to understand the novelty of this work.\\n\\nWe appreciate your feedback highlighting the lack of discussion around \\\"context-aware\\\" and \\\"attribute-aware\\\" recommendations. Including this discussion would indeed help situate our work more clearly within the broader context of related research in this domain.\\n\\n**Context-Aware Recommendation**: \\nThe concept of context-aware recommendation, as introduced in Adomavicius et al. (2011), provides a general framework where \\u201ccontext\\u201d is broadly defined as any auxiliary information. This framework emphasizes that user preferences for items can vary based on the context in which interactions occur, reflecting a user-centric view of contextual information. \\n\\nBuilding on this foundation, recent works have explored specific instances of context-aware recommendation, such as \\u201cattribute-aware recommendation.\\u201d These approaches often leverage item or user attributes as contextual information to address various goals, including improving user profiling (Adomavicius et al., 2011), predicting missing item attributes (Wu et al., 2020; Chen et al., 2022), enhancing recommendations for cold-start scenarios(Deldjoo et al., 2019), or providing attribute-based explanations for recommendations (Xian et al., 2021). \\n\\nOur work differs significantly in its focus and objectives. 
We term our task \\u201cattribute-specific recommendation,\\u201d which involves generating recommendations explicitly constrained by logical combinations of attributes. Unlike attribute-aware approaches, which aim to improve recommendation quality by incorporating attribute information as auxiliary data, our work directly targets the task of satisfying explicit attribute-based constraints posed by users. We believe the explicit reasoning over attribute-constrained queries is underexplored and represents a meaningful contribution to the field. \\n\\nWe have ensured this distinction is made clearly in the revised draft of the paper.\\n\\n**Additional Baselines**: We observe that many recent \\\"attribute-aware\\\" recommendation approaches leverage Graph Convolutional Networks to better model attribute-item-user interactions. Also, in response to reviewer `Vvma`'s suggestion, we have implemented `LightGCN`, a graph convolution-based solution for recommendation. Hyperparameter tuning is currently underway, and due to GPU constraints, it may require an additional two days to complete. We will update the draft with the new results and respond to this thread once the tuning is finalized.\\n - Adomavicius et al., (2011) Context-Aware Recommender Systems\\n - Wu et al., (2020) Joint Item Recommendation and Attribute Inference: An Adaptive Graph Convolutional Network Approach\\n - Chen et al., (2022) Multi-view Graph Attention Network for Travel Recommendation\\n - Deldjoo et al., (2019) Movie genome: alleviating new item cold start in movie recommendation\\n - Xian et al., (2021) EX3: Explainable Attribute-aware Item-set Recommendations\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"Thank you for engaging in the rebuttal phase. We deeply value your feedback and aim to address your concerns thoroughly. Please see the following points for our responses.\\n\\n---\\n\\n### **Applicability of LightGCN**\\n\\nThank you for pointing out this concern. 
We would like to clarify that all models in our study, including LightGCN, are adapted to jointly model user-item and attribute-item interactions. Specifically, for LightGCN, we construct a joint graph where item nodes are connected both to users and item-attributes. This adaptation ensures that LightGCN incorporates attribute information, addressing the primary focus of our study.\\n\\nIn fact, any recommendation system model initially developed for user-item interactions can be extended for our setup by including attribute-item interactions in the training process. This involves sharing the item embeddings (or graph representations, in the case of LightGCN) across both user-item and attribute-item interactions.\\n\\n---\\n\\n### **Applicability of Baselines in Experimental Comparisons**\\n\\nWe carefully designed our study to be comprehensive and relevant to the primary focus of our work, ensuring that it addresses both standard practices and the unique challenges of our proposed task. These points are discussed in detail in the Baseline subsection (**sec 4.4**), background (**sec 2.1**), with a summary provided below:\\n\\n**Relevance of Selected Baselines**:\\nThe baselines chosen\\u2014MF, NeuMF, and LightGCN\\u2014are standard, well-established methods in recommendation systems. They represent varying levels of complexity and address user-item and attribute-item interactions effectively.\\n\\n- **Matrix Factorization (MF)** serves as the simplest and most direct analogue to our approach, making it foundational for comparisons.\\n- **Neural Matrix Factorization (NeuMF)** extends this by incorporating non-linear interactions, offering a more expressive benchmark.\\n- **LightGCN**, despite being primarily designed for user-item graphs, was adapted in our experiments to include attribute-item relationships through a joint graph. 
This ensures its applicability to our problem setting.\\n\\n**Focus of the Study:**\\nOur primary objective is to address set-theoretic compositionality within recommendation systems, a unique challenge not explicitly tackled by most existing advanced neural methods. While sophisticated approaches exist for matrix completion tasks, their design does not align with the explicit goal of handling compositional semantics. This positions our method as complementary rather than directly comparable to such baselines.\\n\\n**Thoroughness of the Evaluation:**\\nWe believe the three selected baselines cover a diverse range of approaches, from simple to advanced techniques, and provide a comprehensive benchmark for evaluating our method across multiple domains. The inclusion of LightGCN, specifically, strengthens our experimental rigor and demonstrates the adaptability of standard models to incorporate attribute-item interactions.\\n\\nBy focusing on these baselines, our study emphasizes both robustness and relevance without diluting the central contribution of introducing a method tailored to set-theoretic compositionality.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"General Response - Notations\", \"comment\": [\"We appreciate the reviewers' observations regarding the clarity of our explanations of the notations. We acknowledge that some key concepts were not described as thoroughly as they could have been. In this general response, we provide an outline of the methodology of our work and clarify these concepts. These updates will be incorporated into the revised version of the manuscript. 
We kindly request the reviewers to review the general response before addressing our answers to the specific questions.\", \"---\", \"### **Problem Setup**\", \"The user-item matrix is the collaborative signal $U$.\", \"The attribute information is coded in $A$.\", \"We want personalized recommendations with attribute constraints.\", \"Which comedy movies would the user like? This is the same as completing rows in $U \\\\cap A$.\", \"Which romantic-comedy movies would the user like? This is the same as completing rows in $U \\\\cap A_1 \\\\cap A_2$.\", \"Which comedy movies would the user like that are not romantic? This is the same as completing rows in $U \\\\cap A_1 \\\\cap \\\\neg A_2$.\", \"---\", \"### **Similarity Function**\", \"Similarity functions for encoding the collaborative and attribute signals.\", \"**For vectors**, we represent the user with vector $\\\\mathbf{u} \\\\in \\\\mathbb{R}^D$ and items with vector $\\\\mathbf{i} \\\\in \\\\mathbb{R}^D$; the similarity is calculated as $\\\\sigma(\\\\mathbf{u}^T\\\\mathbf{i})$ or, for any neural method, $\\\\sigma(f(\\\\mathbf{u}, \\\\mathbf{i}))$.\", \"**For box embeddings**, we represent the user $u$, item $i$ and attribute $a$ as hyper-rectangles or Cartesian products of intervals: $\\\\textrm{Box}(u)=\\\\prod_{d=1}^D[u_d^{\\\\llcorner}, u_d^{\\\\urcorner}], \\\\textrm{Box}(i)=\\\\prod_{d=1}^D[i_d^{\\\\llcorner}, i_d^{\\\\urcorner}], \\\\textrm{Box}(a)=\\\\prod_{d=1}^D[a_d^{\\\\llcorner}, a_d^{\\\\urcorner}]$,\", \"where $u_d^{\\\\llcorner} < u_d^{\\\\urcorner}, i_d^{\\\\llcorner} < i_d^{\\\\urcorner}, a_d^{\\\\llcorner} < a_d^{\\\\urcorner}$.\", \"Also, the $d$-th dimension of $Box(u)$ is an interval denoted as $\\\\textrm{Box(u)}_d := [u_d^{\\\\llcorner}, u_d^{\\\\urcorner}]$.\", \"Note that the total volume of a rectangle is simply the product of the lengths along each dimension. 
So the rest of the discussion will focus on a single dimension $d$; we can multiply the scores for each dimension to calculate the final score.\", \"**Volume of item box** at dimension $d$ is given by the length of the interval:\", \"$$\\\\operatorname{Vol}(\\\\textrm{Box(i)}_d) = \\\\max(i_d^{\\\\urcorner}-i_d^{\\\\llcorner}, 0).$$\", \"**Volume of intersection between user and item box** is given by $$\\\\operatorname{VolInt}(\\\\textrm{Box(u)}_d, \\\\textrm{Box(i)}_d) = \\\\max(\\\\min(u_d^{\\\\urcorner}, i_d^{\\\\urcorner}) - \\\\max(u_d^{\\\\llcorner}, i_d^{\\\\llcorner}), 0).$$\", \"To understand the above equation better: the intersection of two intervals is again an interval, whose `min` coordinate is the `max` of the two intervals' `min` coordinates, and whose `max` coordinate is the `min` of the two intervals' `max` coordinates.\", \"The volume of intersection between the user and the item box decides how much the item is related to the user. We normalize the similarity score by the item box volume. 
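These hard (non-smoothed) per-dimension operations can be sketched in a few lines of Python. This is an illustrative sketch with our own function names, not the paper's implementation; it assumes a non-degenerate item interval.

```python
# Per-dimension hard-box operations: an interval is a (lo, hi) pair.

def vol(interval):
    """Vol(Box(i)_d) = max(hi - lo, 0)."""
    lo, hi = interval
    return max(hi - lo, 0.0)

def vol_int(a, b):
    """VolInt of two intervals: lo = max of the lows, hi = min of the highs."""
    return vol((max(a[0], b[0]), min(a[1], b[1])))

def f_box(user_d, item_d):
    """Per-dimension similarity: intersection volume normalized by item volume."""
    return vol_int(user_d, item_d) / vol(item_d)

# Item interval fully inside the user interval -> similarity 1 in this dimension.
print(f_box((0.0, 4.0), (1.0, 2.0)))   # 1.0
# Half of the item interval overlaps the user interval -> similarity 0.5.
print(f_box((0.0, 1.0), (0.5, 1.5)))   # 0.5
```

Normalizing by the item volume is what makes the score saturate at 1 exactly when the item interval is contained in the user interval.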
The similarity value for the dimension $d$ is given as:\", \"$$F_{Box}(\\\\textrm{Box(u)}_d, \\\\textrm{Box(i)}_d) := \\\\frac{\\\\operatorname{VolInt}(\\\\textrm{Box(u)}_d, \\\\textrm{Box(i)}_d)}{\\\\operatorname{Vol}( \\\\textrm{Box(i)}_d) }\", \"= \\\\frac{\\\\max(\\\\min(u_d^{\\\\urcorner}, i_d^{\\\\urcorner}) - \\\\max(u_d^{\\\\llcorner}, i_d^{\\\\llcorner}), 0)}{\\\\max(i_d^{\\\\urcorner}-i_d^{\\\\llcorner}, 0)}.$$\", \"We multiply this for each dimension to get the similarity score between $u$ and $i$; $\\\\Pi_{d=1}^{D} F_{box}(\\\\textrm{Box}(u)_d, \\\\textrm{Box}(i)_d).$\", \"**Importantly**, the similarity function $F\\\\_{Box}$ achieves the value 1 when the item $Box(i)$ is completely enclosed by $Box(u)$, thereby establishing a set-theoretic geometric interpretation.\", \"We use the same approach to calculate the similarity between the attribute $Box(a)$ and item $Box(i)$.\"]}", "{\"title\": \"General Response - Scope of our work\", \"comment\": \"This general response is intended to address the questions regarding the scope of our work raised by the reviewers, and we will respond to specific questions or concerns in their respective rebuttal sections separately. We kindly request the reviewers to read the general response before reviewing our answers to the specific questions.\\n\\n---\\n\\n\\n**Scope of the Work**\\n\\nThe task of constrained recommendation is both significant and broadly applicable in real-life scenarios, as it enables recommendation systems to meet specific and nuanced user preferences. For instance, on Netflix, a user might search for dark-comedy movies. While Netflix could broadly suggest dark-comedy films, the recommendations need to align with the user's individual interpretation of the genre. For example, one user might consider *Fargo* a dark-comedy movie, while another may not. Similarly, on Spotify, a user might be interested in exploring jazz music, but explicitly condition on excluding smooth jazz from the suggestions. 
On Yelp, two users might query \\\"Italian place with free parking.\\\" A well-designed system should adapt to their distinct preferences, recommending a casual pizzeria to one user and a fine-dining Italian restaurant to the other, based on their past behavior.\\n\\nAt its core, even a simple attribute query, such as \\u201cItalian restaurants,\\u201d can be seen as a set-theoretic query\\u2014a conjunction between the restaurants aligning with the user's preferences and the set of Italian restaurants. As queries grow more complex, involving logical combinations of attributes, the recommendation task becomes increasingly challenging. Such scenarios are common in real-world applications, underscoring the need for recommendation systems that can interpret and satisfy these explicit set-theoretic constraints while still personalizing results to individual tastes.\\n\\nWhile traditional recommendation systems are well-explored, they generally fall short in addressing queries that involve attributes and their logical combinations. This limitation restricts their ability to solve practical tasks where such constraints are often integral to user needs.\\n\\nOur work introduces a novel framework that bridges this gap, enabling recommendation systems to effectively manage set-theoretic queries across multiple practical domains. By addressing this critical challenge, we aim to extend the functionality of recommendation systems to better meet nuanced user needs.\"}", "{\"title\": \"Training and Inference\", \"comment\": [\"### **Training with the encoded similarity**\", \"We train with a binary cross-entropy loss, where the cross entropy is calculated between the observed data and the model\\u2019s similarity output (incorporating both collaborative and attribute data).\", \"**Parameters**: For the MF method, the vectors corresponding to the users, items, and attributes are trainable parameters. For the NeuMF method, the neural network is trained along with these parameters. 
For box embeddings, the vectors corresponding to the lower left ($\\llcorner$) and upper right ($\\urcorner$) corners are the trainable parameters.\", \"**Hyperparameters**: We perform extensive hyperparameter tuning for the learning rate, batch size, volume and intersection temperatures of the GumbelBoxes ($\\nu, \\tau$), and number of negative samples for the noise contrastive training. Please refer to Appendix B.2 for the detailed hyperparameter space. For each method, we conduct 100 hyperparameter runs, varying the values of the aforementioned hyperparameters.\", \"---\", \"### **Inference**:\", \"During inference, we calculate the volume of the box embedding region corresponding to the set-theoretic query.\", \"Inferring on $U \\cap A$: We calculate the score as: $$\\textrm{score}(u \\wedge a, i)_d = \\frac{\\operatorname{VolInt}\\_{GB}(\\textrm{Box(u)}\\_d, \\textrm{Box(a)}\\_d,\\textrm{Box(i)}\\_d)}{\\operatorname{Vol}\\_{GB}(\\textrm{Box(i)}\\_d)}.$$\", \"Inferring on $U \\cap A_1 \\cap A_2$: We calculate the score as: $$\\textrm{score}(u \\wedge a_1 \\wedge a_2, i)_d = \\frac{\\operatorname{VolInt}\\_{GB}(\\textrm{Box(u)}_d, \\textrm{Box}(a_1)\\_d, \\textrm{Box}(a_2)\\_d,\\textrm{Box(i)}\\_d)}{\\operatorname{Vol}\\_{GB}(\\textrm{Box(i)}_d)}.$$\", \"Inferring on $U \\cap A_1 \\cap \\neg A_2$: We use inclusion-exclusion to calculate scores as:\", \"$$\\textrm{score}(u \\wedge a_1 \\wedge \\neg a_2, i)_d = \\textrm{score}(u \\wedge a_1, i)_d - \\textrm{score}(u \\wedge a_1 \\wedge a_2, i)_d$$\", \"For vectors, there is no prescribed way to calculate these set operations, so we explore the following options.\", \"1. Filter: Instead of representing the set operation in the embedding space we use post hoc aggregation.\", \"2. Product: Normalized prediction scores can be multiplied to get the joint query score.\", \"3. 
Geometric: Vector addition and subtraction for intersection and negation respectively.\"]}", "{\"title\": \"From Box to GumbelBox (Abbrev. GB)\", \"comment\": [\"$F_{\\textrm{Box}}$ involves `min` and `max` operations, which hinder gradient flow, making it hard to learn the parameters of the box embeddings.\", \"Dasgupta et al. (2020) propose GumbelBox, where the `min` and `max` corners are replaced with Gumbel random variables, solving this gradient issue. This modification boils down to using `logsumexp`, which is a smooth approximation of the `max` operation. `logsumexp` is defined as $LSE_t(\\mathbf x) := t \\operatorname{log}(\\sum_i e^{x_i/t})$, where $t$ is known as the temperature of the GumbelBox.\", \"Under the GumbelBox (abbrev. $GB$), we change the notations $F\\_{Box}$ to $F\\_{GB}$, $\\operatorname{VolInt}$ to $\\operatorname{VolInt}\\_{GB}$, $\\operatorname{Vol}$ to $\\operatorname{Vol}_{GB}$.\", \"With the new formulation, the per-dimensional similarity score becomes the following:\", \"$$F_\\{GB\\}(\\textrm{Box(u)}_d, \\textrm{Box(i)}_d; (\\tau, \\nu)) := \\frac{LSE\\_\\nu(LSE\\_{-\\tau}(u_d^{\\urcorner}, i_d^{\\urcorner}) - LSE\\_\\tau(u_d^{\\llcorner}, i_d^{\\llcorner}), 0)}{LSE\\_\\nu(i_d^{\\urcorner} - i_d^{\\llcorner}, 0)} =: \\frac{\\operatorname{VolInt}\\_{GB}(\\textrm{Box(u)}_d, \\textrm{Box(i)}_d; (\\tau, \\nu))}{\\operatorname{Vol}\\_{GB}(\\textrm{Box(i)}_d; \\nu)}$$\", \"$\\nu, \\tau$ are the volume and intersection temperatures of the GumbelBox. As $\\nu, \\tau \\rightarrow 0$, the GumbelBox becomes the originally proposed Box. We tune these as hyperparameters.\"]}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"> Lack of efficiency analysis. What is the advantage of using box embeddings over vector embeddings w.r.t. 
running time?\\n\\nBox embeddings are generally quite fast because the computation of box volumes and their intersection volumes can be parallelized over dimensions. \\n\\nWe report the training time (mm:ss) for a single epoch, where we select different batch sizes with 5 negative samples on the Movielens-1M dataset. Experiments are conducted on the `Nvidia GTX 1080Ti GPU`. \\n\\n| Batch Size | MF | NeuMF | LightGCN | Box |\\n|------------|--------|--------|----------|--------|\\n| 64 | 08:37 | 17:00 | 70:30 | 19:32 |\\n| 128 | 04:32 | 09:46 | 38:40 | 11:40 |\\n| 256 | 02:29 | 04:40 | 20:55 | 05:28 |\\n| 512 | 01:18 | 02:23 | 10:47 | 02:54 |\\n| 1024 | **00:40** | **01:20** | 05:24 | **01:12** |\\n\\nWe observe that **MF**, being the simplest approach with minimal computational requirements, is consistently the fastest across all batch sizes. At the largest batch size (1024), it achieves the shortest training time of just 00:40. The **Box**-based method exhibits training times comparable to **NeuMF**. However, it is significantly faster than **LightGCN**, which relies on graph convolutional computations. The iterative message-passing operations required by **LightGCN** result in considerably higher training times, particularly at smaller batch sizes (e.g., 70:30 at a batch size of 64). As the batch size increases, the training time for **Box** embeddings becomes almost as efficient as **MF**. For instance, at a batch size of 1024, **Box** achieves a training time of 01:12, compared to 00:40 for **MF**. This demonstrates that the **computational complexity of box embeddings is of the same order as MF**.\\n\\nNote that the training times above use GumbelBox embeddings, which involve log-sum-exp calculations. However, this could be improved even further at inference time by replacing these soft min and max approximations with hard operators. If such an optimized approach is desired, then training can accommodate this by regularizing temperature. For deployment
For deployment\\nin industrial set-up, we could take additional steps with Box Embeddings as outlined in (Mei et al., 2022b).\\n\\n- Mei et al., (2022b) Learning Probabilistic Box Embeddings for Effective and Efficient Ranking.\\n\\n\\n> The paper introduces the notation of query in the context of recommendation systems. What is the difference between query in the context of this paper and query in the context of search engine like Google?\\n\\nThe primary difference between queries in our work and those in search engines like Google lies in **personalization** and the use of **explicit set-theoretic constraints**. While search engines typically handle free-form natural language queries, our work defines queries as **personalized, constraint-based inputs** that are explicitly set-theoretic, allowing \\\"query\\\" and \\\"constraint\\\" to be treated synonymously. For instance, on Netflix, a user might select the `dark-comedy` movie button. While Netflix could broadly suggest dark-comedy films, the recommendations need to align with the user's individual interpretation of the genre; for example, one user might consider *Fargo* a dark-comedy movie, while another may not.\\n\\nWe also request the reviewer to refer to the \\u201cScope of our Work\\u201d section in the \\u201cGeneral Response.\\u201d\\n\\n**Related work we included in our draft:**\\n\\nWhile set-theoretic queries are commonplace in search, popular question-answering (QA) benchmarks often do not include them. We found **QUEST** (Malaviya et al., 2023) to be the most closely related study, introducing a benchmark for entity-seeking queries with implicit set-based semantics. However, QUEST does not focus on explicit constraints or personalization, which are central to our work. 
On the other hand, we find related studies in the group recommendation systems (Amer-Yahia et al., 2009) where the preferences of multiple users are explicitly aggregated into a coherent recommendation.\\n\\nWe have ensured that the distinctions between our approach and search are clarified in the related work section of the revised manuscript. The new changes will be marked in color blue.\\n\\n- Malaviya et al. (2023) QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations.\\n- Amer-Yahia et al. (2009) Group Recommendation: Semantics and Efficiency.\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"Thank you for recognizing the strengths of our work and for your constructive feedback. While we address your specific questions and concerns in the following rebuttal, we encourage you to review our general response, which highlights common themes regarding the scope of our study and ambiguity in notations.\\n\\n---\\n\\n\\n> Despite showing promising recommendation results, the proposed method seems to be a direct application of box embeddings for personalized item recommendation with set-theoretic queries, which is somehow a limited contribution.\\n\\nWe acknowledge that this work indeed is an application of box embeddings, but it does so in a novel way\\u2014box embeddings have never before been tested empirically for their ability to generalize to arbitrary set-theoretic queries. As such, it was necessary to both (a) create a rigorously-justified set-theoretic task in multiple different domains, and (b) propose methods for leveraging the geometry to calculate scores from the boxes for these set-theoretic queries. Furthermore, understanding why vector-based embeddings fall short in faithfully representing set-theoretic relationships is an essential part of our contribution. Vector methods often struggle with maintaining set-theoretic consistency, a limitation that our approach with box embeddings directly addresses. 
By spotlighting these challenges and presenting an effective alternative, our work provides actionable insights that future researchers can build upon. We will reframe our contributions in the final version to emphasize these unique advancements and insights.\\n\\n---\\n> The paper relies heavily on the theory of box embeddings but some key concepts were not described sufficiently. For example, in line 191, how to guarantee\\u00a0x\\u231e<x\\u231d\\u00a0for all dimensions\\u00a0d; what is the definition of\\u00a0VolIntGB\\u00a0in Equation 3?; what are the parameters of the model to optimize?\\n\\nThe following clarifications will also be incorporated into the revised version of the manuscript.\\n\\n**Definition of $\\\\operatorname{VolInt_{GB}}$?**\\n\\n$\\\\operatorname{VolInt}$ is the volume of the intersection between two boxes. When the box embedding parameters are defined using Gumbel random variables (abbrev. $GB$), we replace $\\\\operatorname{VolInt}$ with $\\\\operatorname{VolInt}_{GB}$. \\n\\nTo elaborate, Dasgupta et al. (2020) propose GumbelBox, where the `min` and `max` corners are replaced with Gumbel random variables, solving the gradient issue. This modification boils down to using `logsumexp`, which is a smooth approximation of the `max` operation. `logsumexp` is defined as $LSE_t(\\\\mathbf x) := t \\\\operatorname{log}(\\\\sum_i e^{x_i/t})$, where $t$ is known as the temperature of the GumbelBox.\\n\\nUnder the GumbelBox (abbrev. $GB$), we change the notations $F\\\\_{Box}, \\\\operatorname{Vol}, \\\\operatorname{VolInt}$ to $F\\\\_{GB}, \\\\operatorname{Vol}\\\\_{GB}, \\\\operatorname{VolInt}\\\\_{GB}$.\\n\\n**Do we need to enforce $x^{\\\\llcorner} < x^{\\\\urcorner}$?**\\n\\nWe do not enforce $x^{\\\\llcorner} < x^{\\\\urcorner}$ when the box embedding is defined as the Gumbel box. 
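As a quick numerical illustration of why this is safe (a hypothetical Python sketch in our own notation, not the authors' code; `t` plays the role of the Gumbel temperature): the smoothed volume stays strictly positive even when the corners flip, and recovers the hard volume as the temperature goes to zero.

```python
import math

def lse(t, xs):
    """LSE_t(x) := t * log(sum_i exp(x_i / t)), a smooth maximum for t > 0."""
    m = max(x / t for x in xs)  # shift for numerical stability
    return t * (m + math.log(sum(math.exp(x / t - m) for x in xs)))

def vol_gb(lo, hi, t):
    """Gumbel-smoothed interval volume: LSE_t(hi - lo, 0)."""
    return lse(t, [hi - lo, 0.0])

# Flipped corners (lo > hi): the smoothed volume is tiny but strictly positive,
# so the trainable corners never need an ordering constraint.
print(vol_gb(2.0, 1.0, t=0.1) > 0.0)                # True
# As t -> 0, the smoothed volume recovers the hard volume max(hi - lo, 0).
print(abs(vol_gb(1.0, 3.0, t=1e-3) - 2.0) < 1e-2)   # True
```

The positive-everywhere gradient of the smoothed volume is what lets disjoint or flipped boxes still receive a training signal.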
\n\nIn Gumbel box, the corners $x^{\\llcorner}, x^{\\urcorner}$ are replaced with max-Gumbel and min-Gumbel random variables $X^{\\llcorner}, X^{\\urcorner}$ with means $x^{\\llcorner}, x^{\\urcorner}$, respectively. Under these Gumbel distributions, any point on the real line has a positive probability of being inside the box, i.e., $\\operatorname{Prob}(X^{\\llcorner} <z< X^{\\urcorner}) > 0, \\forall z \\in \\mathbb{R}$. \n\nThus, when the endpoints flip, i.e., $x^{\\llcorner} > x^{\\urcorner}$, the value of $\\operatorname{Prob}(X^{\\llcorner} <z< X^{\\urcorner})$ becomes negligible, although it remains positive. This is also observed in our equation $\\operatorname{Vol}\\_{GB}([x^{\\llcorner}, x^{\\urcorner}]) = LSE_t(x^{\\urcorner} - x^{\\llcorner}, 0)$: we have $LSE_t(x, 0) > 0$ even if $x^{\\urcorner} - x^{\\llcorner} < 0$, i.e., $x^{\\llcorner} > x^{\\urcorner}$. Thus we need not ensure $x^{\\llcorner} < x^{\\urcorner}$ when we use $\\operatorname{Vol}\\_{GB}$ instead of $\\operatorname{Vol}$.\n\n**Trainable parameters of box embeddings?**\n\nThe upper and lower corners $x^{\\llcorner}, x^{\\urcorner}$ are both trainable parameters of the box embeddings. The training signal is provided by the volume of intersection ($\\operatorname{VolInt}\\_{GB}$) between two boxes. To ensure a fair comparison, we compare the performance of $d$-dimensional box embeddings with $2d$-dimensional vector-based embeddings. Note that the temperature $t$ of the GumbelBox is treated as a hyperparameter.\n\n---\n\n> Representative baselines such as LightGCN [1] and MultiVAE [2] were not considered. Including more recent and advanced baselines will further ascertain the strength of the proposed method.\n\nWe have already implemented LightGCN and are currently running the hyperparameter tuning for all the datasets for faithful final results. 
Due to `GPU` constraints, this process would take two more days. We will update the results in our revised manuscript, and respond back in this thread.\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"Thank you for recognizing the strengths of our work and for your constructive feedback. While we address your specific questions and concerns in the following rebuttal, we encourage you to review our general response, which highlights common themes regarding the scope of our study and ambiguity in notations.\n\n---\n\n> The approach presented in this paper conflicts with the widely used matrix factorization model, which effectively leverages collaborative filtering signals between users and items. It is unclear how the proposed model addresses these signals.\n\nAll the models, including our Box-based method, train on the collaborative signal (user-vs-item matrix $U$) and the attribute signal (attribute-vs-item matrix $A$). \n\nAs you mentioned, the matrix factorization model effectively leverages collaborative filtering signals between users and items. It achieves that by training to increase the dot product between user and item representations when a user and an item interact, and to decrease it when they do not (on a negatively sampled version).\n\nIn the Box embedding-based method, we train on $U$ in a similar manner. Instead of dot product similarity, we use box containment as the similarity measure; i.e., if a user $u$ has watched an item $i$, then the collaborative signal encourages the item $\\mathrm{Box}(i)$ to be inside the user $\\mathrm{Box}(u)$. 
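A minimal sketch of this containment-based score (our illustration, using hard box volumes for readability; the actual model uses the Gumbel-smoothed volumes):

```python
import numpy as np

def containment_prob(user_box, item_box):
    # P(i | u) = Vol(Box(u) ∩ Box(i)) / Vol(Box(i)): the fraction of
    # the item box that lies inside the user box.
    (u_min, u_max), (i_min, i_max) = user_box, item_box
    overlap = np.clip(np.minimum(u_max, i_max) - np.maximum(u_min, i_min), 0.0, None)
    return float(np.prod(overlap) / np.prod(i_max - i_min))
```

An item box fully inside the user box scores 1, and a disjoint one scores 0, matching the two training targets below.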
We enforce this set containment by optimizing the following:\\n\\n\\nIf the user $u$ has watched an item $i$, i.e., $U\\\\_{u, i} == 1$, then,\\n$$ \\\\frac{ \\\\operatorname{Vol}(\\\\mathrm{Box}(u) \\\\cap \\\\mathrm{Box}(i))}{\\\\operatorname{Vol}( \\\\mathrm{Box}(i))} \\\\rightarrow 1$$\\n\\nIf a user $u$ has not watched an item $i$, i,e, $U\\\\_{u, i} == 0$ (we sample the negatives), then, \\n$$ \\\\frac{ \\\\operatorname{Vol}(\\\\mathrm{Box}(u) \\\\cap \\\\mathrm{Box}(i))}{\\\\operatorname{Vol}( \\\\mathrm{Box}(i))} \\\\rightarrow 0$$\\n\\nHere, $\\\\rightarrow$ means \\\"training to achieve the value\\\". During training, we get values between $(0,1)$ for the above-mentioned expression. We train them to be as close to $1$ or $0$ according to the collaborative signal in $U$. We use binary cross-entropy to achieve this. \\n\\nNote that, by doing this optimization we not only incorporate collaborative signals but also conceptualize the users as a set that contains all the relevant items. \\n\\n----\\n\\n> The experimental baselines are not state-of-the-art; comparing the proposed method to more advanced recommendation models would better demonstrate its advantages.\\n\\nIn response to reviewer `Vvma`'s suggestion, we have implemented `LightGCN`, a graph convolution-based solution for recommendation. Hyperparameter tuning is currently underway, and due to GPU constraints, it may require an additional two days to complete. We will update the draft with the new results and respond to this thread once the tuning is finalized.\\n\\n---\\n\\n> Some equations are difficult to follow due to unclear notation explanations.\\n\\nWe have crafted a general response above titled General Response - Notations, where we have simplified the notations and provided a more detailed explanation for clarity. This response addresses similar questions raised by other reviewers regarding the notations. 
We kindly request that you review this section, and please let us know if any of the notations remain unclear. We are eager to receive your feedback and will revise our current draft accordingly.\"}", "{\"summary\": \"This paper proposes using box embeddings for matrix completion to improve personalized item recommendation with set-theoretic queries. Box embeddings are employed to bypass the limitation of commonly used vector embeddings, which might fail to recommend items for set-theoretic queries consisting of negation and intersection relationships. By representing users, items, and attributes as box embeddings, i.e., hyper-rectangles in d-dimensional space, the proposed approach can jointly factorize the user-item interaction matrix and the item-attribute matrix. Then, users and attributes are regarded as boxes containing multiple items. As such, given a query containing set relationships between attributes, the model retrieves the top items having the largest box volume shared with those of users as the recommendation list. The whole model is trained to capture containment relationships, i.e., user and attribute boxes contain multiple item boxes. Experimental results on four datasets demonstrate the strong performance of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper studies an interesting yet not well-explored problem: personalized item recommendation with set-theoretic queries. The set nature of queries, which consist of set relationships such as negation and intersection, leads to an interesting research question: how to capture such relationships to make more accurate recommendations.\n\n2. The employment of box embeddings to represent users, items, and attributes is intuitive and sensible to capture set relationships between these three.\n\n3. 
Experimental results on four real-world datasets demonstrate good performance of the proposed box embedding method, outperforming vector embedding approaches.\n\n4. The paper is well-structured and easy to follow.", "weaknesses": "1. Despite showing promising recommendation results, the proposed method seems to be a direct application of box embeddings for personalized item recommendation with set-theoretic queries, which is somehow a limited contribution.\n\n2. The paper relies heavily on the theory of box embeddings but some key concepts were not described sufficiently. For example, in line 191, how to guarantee $x^\\llcorner < x^\\urcorner$ for all dimensions $d$; what is the definition of $VolIntGB$ in Equation 3?; what are the parameters of the model to optimize?\n\n3. The baselines are somewhat limited. Although the authors already mentioned them in Section 2.1, representative baselines such as LightGCN [1] and MultiVAE [2] were not considered. Including more recent and advanced baselines will further ascertain the strength of the proposed method.\n\n[1] He et al. LightGCN: Simplifying and powering graph convolution network for recommendation. SIGIR 2020.\n[2] Liang et al. Variational autoencoders for collaborative filtering. WWW 2018.\n\n4. Lack of efficiency analysis. What is the advantage of using box embeddings over vector embeddings w.r.t. running time?\n\n5. Missing descriptions of some important experimental settings, e.g., how many negative samples are required to train the equations in lines 233 and 240. Moreover, an ablative analysis of key hyper-parameters is also not presented. For instance, how does the number of negative samples affect the model accuracy? The same question applies to $w$ in line 240.", "questions": "The paper introduces the notion of a query in the context of recommendation systems. 
What is the difference between query in the context of this paper and query in the context of search engine like Google?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"> (Q1) The authors used nDCG@K for model selection, but evaluated the final performance based on HR@K. Why did the authors use inconsistent metrics for validation and testing? In my opinion, HR@K with 100 negative items is a rather insensitive measure for ranking evaluation. I would like to recommend using Recall@K without negative sampling.\\n> (C2) In line 368, the authors report that they followed the standard sample scoring procedure described in Rendle et al. (2020). However, to my understanding, using this sampling technique is not recommended for a dataset with a small item catalog such as Last-FM, MovieLens-1M, NYC-R. It may just undermine the reliability of the reported results to reduce a small experimental cost.\\n\\n\\nThe main results presented in the paper (Table 4) actually align with your recommendation, so we will clarify a few points.\\n\\n- *Model checkpoint selection:* We only use 100 negative samples for validation (tuning hyperparameters and model selection) because this needs to be fast enough to run during training. This is also why we used nDCG@k, as it is more sensitive for ranking, particularly in the restricted setting of 100 negative samples. \\n- *Final Evaluation with Full Item Set:* Our reported HR@k at set-theoretic evaluation does not use 100 random negative samples, but rather the whole set of items. We report HR@k because this is more standard in recommendation literature, and in this case is synonymous with Recall@k which you suggested (since this is a leave-one-out evaluation). \\n\\nWe have highlighted this part in color blue, in Section 4.3. 
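To make this concrete, a minimal sketch (function name ours) of the leave-one-out evaluation, where HR@k and Recall@k coincide because each user has exactly one held-out positive:

```python
import numpy as np

def hit_rate_at_k(scores, held_out_item, k):
    # Rank all items by score; HR@k is 1 iff the single held-out
    # positive appears in the top-k, which equals Recall@k here.
    top_k = np.argsort(-scores)[:k]
    return float(held_out_item in top_k)
```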
Please let us know if that alleviates the doubt about the experimental settings; we would be very eager to address any further concerns. \n\n---\n\n> (Q2) In the current manuscript, the authors do not mention/discuss the result on MovieLens-20M (Table 7 in Appendix B.3). Also, Table 7 is not self-contained. What is the definition of VEC-*? (probably MF or NeuMF?)\n\nThank you for pointing this out. This was an honest oversight on our part. It occurred while moving the ML20M results to the appendix to accommodate page restrictions. VEC-* is MF, and we have updated Table 7 with the results from NeuMF. The general conclusions drawn from these results remain unchanged. Once we finish experiments with additional baselines, we will update the final results table and conclusions and will revert to this thread with the updated information.\"}", "{\"title\": \"Thanking the Reviewers\", \"comment\": \"We sincerely thank the reviewers for their time and thoughtful, constructive feedback. In this rebuttal, we humbly take this opportunity to address their concerns and questions regarding the related work and experimental setup, aiming to clarify the scope of our work and provide a more detailed explanation of the notations used.\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"> Missing descriptions of some important experimental settings, e.g., how many negative samples are required to train the equations in lines 233 and 240. Moreover, an ablative analysis of key hyper-parameters is also not presented. For instance, how does the number of negative samples affect the model accuracy? The same question applies to $w$ in line 240.\n\n\nWe perform extensive hyperparameter tuning for the learning rate, batch size, volume, and intersection temperature of GumbelBoxes ($\\nu, \\tau$), the number of negative samples for noise contrastive training (Equation in line 233), and the attribute loss constant $w$ (Equation in line 240). 
Specifically, we vary the number of negative samples in {1, 5, 10, 20} and the attribute loss constant $w$ in the range {0.1, 0.3, 0.5, 0.7, 0.9}. For each method, 100 hyperparameter runs are conducted, with each run drawing a random hyperparameter combination from the grid of all possible combinations. \n\nDue to the random sampling nature of the hyperparameter tuning, different runs with varying numbers of negative samples are likely to differ in other hyperparameter settings as well. This makes it challenging to isolate the impact of specific hyperparameters, such as the number of negatives or $w$, under the current scheme. \n\nWe have already provided this discussion on hyper-parameters in Section 4.3 and Appendix B.2. We will update the writing around the loss equations to incorporate a discussion of the negative sampling and attribute constant hyper-parameters.\n\nEdit (Sun Nov 24): Updated Appendix B.2 with a parallel coordinates plot of the hyperparameter space vs. the Box model's performance on ML20M.\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"### **Discussing the Relationship with Existing Studies**\n\nThank you for raising this important point. We agree that it is crucial to situate a proposed task within the broader context of existing research.\n\nWe believe this concern aligns with the earlier suggestion about expanding discussions on \\\"context-aware recommendation\\\" and \\\"attribute-aware recommendation.\\\" In response to that feedback, we included a detailed discussion in **Section A.2 of the related work**. Specifically, we clarified how our work differs from these approaches. For example:\n\n**Context-Aware Recommendation**: These methods aim to personalize recommendations by incorporating contextual information (e.g., time, location). 
While they enhance prediction quality, they do not address the logical compositionality required to satisfy explicit user constraints.\\n\\n**Attribute-Aware Recommendation:** These approaches use attribute data to improve recommendation quality or enhance explainability. However, attributes are typically treated as auxiliary features rather than central elements in the recommendation process. Consequently, the number of attributes considered is often small\\u2014for instance, only eight movie-related attributes are used in the Movielens 20M dataset. This limits the scope and depth of attribute utilization.\\n\\nIn contrast, our work fundamentally differs by focusing on attribute-constrained recommendation, where the system is explicitly designed to satisfy logical combinations of attributes as constraints specified by users. Moreover, we address a broader and richer set of attributes, which are carefully curated from diverse sources. This enables a more thorough and representative evaluation, ensuring that our approach aligns with the complexities of real-world scenarios.\\n\\nIn summary, our study introduces and formalizes the task of attribute-constrained recommendation, which stands apart from existing frameworks by addressing explicit user constraints based on attributes. This distinction is emphasized throughout the related work section to ensure clarity.\\n\\nWe hope this explanation highlights the careful consideration of related studies in positioning our contribution. 
Let us know if there are additional aspects you'd like us to elaborate on.\"}", "{\"summary\": \"The authors apply box embedding to attribute-specific query recommendation.\\nThey formulated the task of attribute-specific query recommendation and proposed a recommendation method based on box embedding for the task.\\nThe authors also tried establishing an evaluation protocol for this new task and provided detailed analyses based on generalization spectrum gap and compound error.\\nOn the other hand, the current manuscript severely lacks a discussion of existing recommendation fields (e.g., context-aware recommendation), and the technical novelty is unclear.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors tried to use the latest technique in the NLP field (i.e., box embedding) for a recommendation-related task.\\n2. The authors have carefully designed an evaluation protocol for attribute-specific query recommendation based on traditional collaborative filtering.\", \"weaknesses\": \"1. The scope of this study is on a rather limited application, which the authors called \\\"attribute-specific query recommendation\\\".\\n2. The current manuscript lacks related work on \\\"attribute-specific query recommendation\\\". In addition, the authors should discuss the relationship between this study and context-aware recommender systems [a].\\n3. Some experimental settings are not convincing. See the following questions/comments for details.\\n\\n### References\\n[a] Adomavicius, Gediminas, and Alexander Tuzhilin. \\\"Context-aware recommender systems.\\\" Handbook of Recommender Systems. Boston, MA: Springer US, 2010. 217-253.\", \"questions\": \"### Questions\\n - (Q1) The authors used nDCG@K for model selection, but evaluated the final performance based on HR@K. Why did the authors use inconsistent metrics for validation and testing? 
In my opinion, HR@K with 100 negative items is a rather insensitive measure for ranking evaluation. I would like to recommend using Recall@K without negative sampling.\\n - (Q2) In the current manuscript, the authors do not mention/discuss the result on MovieLens-20M (Table 7 in Appendix B.3). Also, Table 7 is not self-contained. What is the definition of VEC-*? (probably MF or NeuMF?)\\n \\n### Comments.\\n - (C1) To my understanding, \\\"attribute-specific query recommendation\\\" is the task where (1) item attribute values are partially observed and often missing, and (2) in the prediction phase, the positive items are conditioned not only on a user but also on a boolean query. The problem of (1) has been addressed in existing studies (e.g., [b,c]). I am not familiar with (2) in the context of item recommendation, but there might be existing research on it. A discussion/comparison of existing studies on these points would make it easier to understand the novelty of this work.\\n - (C2) In line 368, the authors report that they followed the standard sample scoring procedure described in Rendle et al. (2020). However, to my understanding, using this sampling technique is not recommended for a dataset with a small item catalog such as Last-FM, MovieLens-1M, NYC-R. It may just undermine the reliability of the reported results to reduce a small experimental cost.\\n\\n\\n## References\\n\\n [b] Wu, Le, et al. \\\"Joint item recommendation and attribute inference: An adaptive graph convolutional network approach.\\\" Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 2020.\\n\\n [c] Xian, Yikun, et al. \\\"Ex3: Explainable attribute-aware item-set recommendations.\\\" Proceedings of the 15th ACM Conference on Recommender Systems. 
2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank authors for your detailed responses providing additional context to the paper.\\n\\nI decide to keep my original score. There still remain some questions need further refined about the proposed method.\\n\\nFirst, while box embedding approach has the potential to solve the set-theoretic query-related problems, the proposed method (mostly) directly applies the prior work Dasgupta et al., 2020. As such, the recommendation improvements come from the existing box embedding method. This is also mentioned by authors in the rebuttal *Vector methods often struggle with maintaining set-theoretic consistency, a limitation that our approach with box embeddings directly addresses*. This means box embedding is the key solution, yet it has been explored for recommendation task. \\n\\nSecond, while the efficiency analysis has been explored, it is based on ML-1M dataset, which is a very small dataset. Testing on larger datasets would further provide insights into the efficiency. Representative baseline such as Multi-VAE is also missing. \\n\\nThird, as mentioned in the related work, a recent study *Learning User Representations with Hypercuboids for\\nRecommender Systems* also applied box embedding for recommendation task. However, there has been no *clear* discussion and comparison to this work, making it difficult to understand the connection between the proposed method and prior study.\"}", "{\"comment\": \"Thank authors for your detailed responses.\\n\\nI appreciate authors' responses that indeed clarify some of my concerns.\\nHowever, the current manuscript appears to be incomplete, as a significant amount of information was added during this discussion phase. 
Based on the discussions here, I recommend improving the paper further and resubmitting it at a later opportunity.\n\nIn my opinion, the current paper attempts to achieve two goals simultaneously: (1) proposing a new task and (2) introducing a non-trivial method that differs significantly from existing research. As a result, the discussions on each aspect seem to lack sufficient depth.\nIf the goal is to propose a new task, it is crucial to carefully discuss its relationship with existing studies. On the other hand, if the focus is on proposing a new method, experimental comparisons with applicable existing methods should be conducted, or if such methods are not applicable to the proposed task, the reasons should be thoroughly discussed.\n\nAs a comment for future improvements of this paper, LightGCN, mentioned by the authors, is not an appropriate baseline. This is because the graph used by LightGCN is based only on user-item interactions and does not incorporate attributes, which are the most important aspect of this study.\nSo, I will maintain my original score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"In our work, we recognize a fundamental requirement to natively represent set-theoretic relationships in the embedding space for tasks involving complex concept intersections, unions, and differences. For instance, a user's preference for movies that are both romantic and comedic can be naturally represented using set-theoretic operators. More specifically, we want to learn a map $f: \\textrm{attribute} \\rightarrow \\textrm{embedding}$, such that $f(\\textrm{romance} \\cap \\textrm{comedy}) = f(\\textrm{romance}) \\cap f(\\textrm{comedy})$. This property is commonly known as a homomorphism of Boolean algebras.\n\nCurrent vector-based methods struggle with preserving such relationships, as they do not inherently encode the geometric structure necessary for Boolean algebra. 
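For instance (a hypothetical sketch, not the paper's code), axis-aligned boxes are closed under intersection, which is exactly what allows the embedding of a conjunction of attributes to be computed from the embeddings of its parts:

```python
import numpy as np

def box_intersection(a, b):
    # The intersection of two axis-aligned boxes is again a box:
    # elementwise max of the min corners, min of the max corners.
    (a_min, a_max), (b_min, b_max) = a, b
    return np.maximum(a_min, b_min), np.minimum(a_max, b_max)
```

A vector embedding offers no analogous closed-form region for the intersection of two concepts, which is the limitation discussed above.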
This limitation motivates our choice of box embeddings, which are uniquely suited to natively handle set operations.\n\nWe devise a training objective with box embeddings that ensures set containment, i.e., item embeddings are contained by the corresponding attribute concepts and users. This ensures that set-theoretic queries (e.g., \\\"find movies that are both romantic and comedic but not action\\\") can be directly executed in the embedding space, offering a principled approach to modeling complex user preferences. Empirical results across multiple datasets demonstrate the efficacy of our approach, validating the utility of box embeddings in modeling complex user preferences.\n\n**Related work on Box + Recommendation system:**\n\nThe study \\\"Learning User Representations with Hypercuboids for Recommender Systems\\\" primarily focuses on enhancing recommendation efficacy by modeling the diversity of users' interests. It advocates for region-based embeddings, suggesting that such representations better capture diverse preferences. The choice of box embeddings in their work is incidental and not fundamental; the authors could have opted for any parameterized region, such as spheres or Gaussian density curves. Notably, their method employs multiple concentric hypercuboids to approximate arbitrary shapes, prioritizing flexibility over logical consistency.\n\nIn contrast, our work is grounded in the explicit need to represent set-theoretic relationships in the embedding space. We propose a framework that adheres to the homomorphism of Boolean algebras, ensuring that set-theoretic operations (e.g., intersections, unions) are natively supported. Arbitrary region-based embeddings, including the multiple hypercuboid formulation described in the related paper, cannot guarantee these properties. 
Empirical results in our work demonstrate that preserving Boolean algebraic consistency leads to superior performance in set-theoretic query tasks, underscoring the significance of this distinction.\\n\\n---\\n\\n**RE: Representative baseline such as Multi-VAE is also missing.**\\n\\nLightGCN (LGCN) was prioritized over Multi-VAE as it consistently outperforms Multi-VAE in recommendation benchmarks (e.g., He *et al.*, Table 4). This made it a more relevant choice for evaluating our proposed approach.\\n\\nMoreover, as emphasized in the background section, while many advanced neural architectures exist, they do not address set-theoretic dependencies. The baselines we selected\\u2014MF, NeuMF, and LightGCN\\u2014are standard, widely recognized models that represent varying levels of complexity and effectively capture user-item and attribute-item interactions.\\n\\n---\\n\\n**Efficiency Analysis on ML20M:**\\n\\nTraining time (mm : ss) for a single epoch, where we select different batch sizes with 5 negative samples on the Movielens-20M dataset. Experiments are conducted on the Nvidia GTX 1080Ti GPU.\\n\\n| Batch Size | MF | NeuMF | Box |\\n| --- | --- | --- | --- |\\n| 2048 | 06:37 | 12:44 | 17:47 |\\n| 4096 | 03:45 | 07:32 | 08:47 |\\n| 8192 | 02:04 | 03:38 | 05:03 |\"}", "{\"title\": \"Rebuttal by the authors\", \"comment\": \"> It is suggested that an efficiency comparison between the traditional vector embedding methods and the proposed box embedding method be discussed.\\n\\n\\nBox embeddings are generally quite fast because the computation of box volumes and their intersection volumes can be parallelized over dimensions. \\n\\nWe report the training time (mm : ss) for a single epoch, where we select different batch sizes with 5 negative samples on the Movielens-1M dataset. Experiments are conducted on the `Nvidia GTX 1080Ti GPU`. 
\n\n| Batch Size | MF | NeuMF | LightGCN | Box |\n|------------|--------|--------|----------|--------|\n| 64 | 08:37 | 17:00 | 70:30 | 19:32 |\n| 128 | 04:32 | 09:46 | 38:40 | 11:40 |\n| 256 | 02:29 | 04:40 | 20:55 | 05:28 |\n| 512 | 01:18 | 02:23 | 10:47 | 02:54 |\n| 1024 | **00:40** | **01:20** | 05:24 | **01:12** |\n\nWe observe that **MF**, being the simplest approach with minimal computational requirements, is consistently the fastest across all batch sizes. At the largest batch size (1024), it achieves the shortest training time of just 00:40. The **Box**-based method exhibits training times comparable to **NeuMF**. However, it is significantly faster than **LightGCN**, which relies on graph convolutional computations. The iterative message-passing operations required by **LightGCN** result in considerably higher training times, particularly at smaller batch sizes (e.g., 70:30 at a batch size of 64). As the batch size increases, the training time for **Box** embeddings becomes almost as efficient as that of **MF**. For instance, at a batch size of 1024, **Box** achieves a training time of 01:12, compared to 00:40 for **MF**. This demonstrates that the **computational complexity of box embeddings is of the same order as MF**.\n\nNote that the training times above use GumbelBox embeddings, which involve log-sum-exp calculations. However, this could be improved even further at inference time by replacing these soft min and max approximations with hard operators. If such an optimized approach is desired, then training can accommodate this by regularizing the temperature. 
For deployment\\nin industrial set-up, we could take additional steps with Box Embeddings as outlined in (Mei et al., 2022b).\\n\\n- Mei et al., (2022b) Learning Probabilistic Box Embeddings for Effective and Efficient Ranking.\"}", "{\"title\": \"Summary of Updates in the Revised Draft\", \"comment\": [\"We sincerely thank the reviewers for their valuable feedback, which has greatly helped us improve our draft by incorporating stronger baselines to support our claims, enhancing the clarity of notations and experimental setup, adding a detailed discussion on time efficiency, and better positioning our work within the context of related research.\", \"The revisions are highlighted (in **blue**) throughout the paper. Key updates are as follows:\", \"1. **Performance of Light-GCN (Table 4)**:\", \"The main results table (Table 4) has been updated to include the performance metrics for Light-GCN.\", \"Relative performance trends remain consistent with previous findings.\", \"2. **Notation Section (Sections 3.1 and 3.2)**:\", \"Simplified and clarified the notations, making them more intuitive and aligned with the problem setup.\", \"Included missing definitions for terms such as `Volnt`, `Vol`, `VolInt_GB`, and others to enhance readability.\", \"3. **Updated Related Work (Appendix A.2 and A.4)**:\", \"Expanded and refined discussions in the related work sections to address reviewer concerns.\", \"Added sections on context-aware recommendation and set-theoretic queries in search.\", \"4. **Time Efficiency Analysis (Appendix D)**:\", \"Discussed the computational efficiency of box embeddings, highlighting their scalability and parallelizability.\", \"Highlighted that box embeddings offer computational complexity similar to simple vector-based methods, significantly faster than complex methods like LightGCN.\", \"5. 
**Highlighted Training Details**:\", \"Explicitly detailed training procedures, including the number of negative samples used, model selection protocols, and final scoring evaluation strategy.\", \"Added a parallel coordinates plot to visualize hyperparameter choices and their impact on performance (Appendix B.2, Figure 2).\", \"Added missing experiments with NeuMF on the ML20M dataset (Table 7, Appendix B.3).\", \"The draft aims to address the specific concerns raised by reviewers and improve overall clarity and completeness.\", \"We humbly request further feedback from the reviewers!\"]}", "{\"summary\": \"This work addresses the task of personalized recommendation using set-theoretic queries. The authors frame this problem as \\\"set-theoretic matrix completion,\\\" highlighting that traditional approaches, such as logistic matrix factorization, do not align with the set-theoretic operations needed during inference.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\tThe study models attribute-specific query recommendation as \\\"set-theoretic matrix completion,\\\" treating attributes and users as item sets.\n2.\tThe paper effectively demonstrates the limitations of existing vector embedding models for this specific task.\n3.\tExperimental results validate the effectiveness of the proposed model.\", \"weaknesses\": \"1.\tThe approach presented in this paper conflicts with the widely used matrix factorization model, which effectively leverages collaborative filtering signals between users and items. 
It is unclear how the proposed model addresses these signals.\\n2.\\tThe experimental baselines are not state-of-the-art; comparing the proposed method to more advanced recommendation models would better demonstrate its advantages.\\n3.\\tSome equations are difficult to follow due to unclear notation explanations.\", \"questions\": \"It is suggested that an efficiency comparison between the traditional vector embedding methods and the proposed box embedding method be discussed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0GzqVqCKns
Probing the Latent Hierarchical Structure of Data via Diffusion Models
[ "Antonio Sclocchi", "Alessandro Favero", "Noam Itzhak Levi", "Matthieu Wyart" ]
High-dimensional data must be highly structured to be learnable. Although the compositional and hierarchical nature of data is often put forward to explain learnability, quantitative measurements establishing these properties are scarce. Likewise, accessing the latent variables underlying such a data structure remains a challenge. In this work, we show that forward-backward experiments in diffusion-based models, where data is noised and then denoised to generate new samples, are a promising tool to probe the latent structure of data. We predict in simple hierarchical models that, in this process, changes in data occur by correlated chunks, with a length scale that diverges at a noise level where a phase transition is known to take place. Remarkably, we confirm this prediction in both text and image datasets using state-of-the-art diffusion models. Our results show how latent variable changes manifest in the data and establish how to measure these effects in real data using diffusion models.
[ "data structure", "hierarchical compositionality", "diffusion models", "statistical physics", "phase transition" ]
Accept (Poster)
https://openreview.net/pdf?id=0GzqVqCKns
https://openreview.net/forum?id=0GzqVqCKns
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCU8lv6FkO", "wyYxlhrW0P", "uOiFk7zq4J", "tcYvMFpF6C", "oeSQ78YkLK", "nE0VaTRzlb", "mYg8W8aEzk", "jX9qvl739L", "itYlbzQrih", "gz7qAbqjWW", "bI61UTMssu", "ZAyjdxxMkM", "VSyC74cthG", "K30Ex0a57l", "JlO6WlTCCV", "EaQ7JMGPVj", "DqKq4YiiuS", "3rvlDG5F7l", "3mJ9JHGPlP", "3VOJxqRHXM", "3RnKV6ijLu" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734681696006, 1732216604379, 1733088224157, 1730701250048, 1732217635397, 1732501014201, 1730252070360, 1732217127550, 1733168397778, 1730167744768, 1732485878777, 1732662513423, 1732215964781, 1737524140046, 1732645979035, 1732216905702, 1730700783358, 1732643308724, 1732720810459, 1732570778133, 1732643094336 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11698/Area_Chair_QoTv" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_7ZQW" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_7ZQW" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_ohKX" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_s11H" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_WL6E" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_s11H" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_WL6E" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_s11H" ], [ 
"ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_ohKX" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ], [ "ICLR.cc/2025/Conference/Submission11698/Reviewer_WL6E" ], [ "ICLR.cc/2025/Conference/Submission11698/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper investigates the hierarchical structure of data using forward-backward experiments with diffusion models. The authors propose that changes in data occur in correlated chunks, with a characteristic correlation length that diverges at a critical noise level associated with a phase transition. These predictions are supported by experiments on synthetic hierarchical data (using a Random Hierarchy Model, RHM) and real-world datasets. The results demonstrate that susceptibility peaks observed at specific noise levels correspond to transitions in the latent structure.\", \"additional_comments_on_reviewer_discussion\": \"Concerns were raised about the practical implications of the findings, the realism of the RHM as a model for continuous data, and clarity in linking RHM conclusions to real-world diffusion experiments. The authors addressed these issues via new experiments (e.g., showing phase transitions in ImageNet with a classifier), and manuscript revisions. While some skepticism remains about the general applicability of RHM, the reviewers converged on being in agreement about the paper's acceptance.\"}", "{\"comment\": \"We thank the reviewer for appreciating our work, recognizing that it is well-written and that our theoretical claims are supported by experiments on natural data. Below, we address the reviewer's specific concerns.\\n\\n**1. Practical applications**\\n\\nFundamental science contributes to practical problems by inspiring follow-up research on different time scales. 
In this specific case, the theoretical and experimental results we provide have quite direct potential for practical applications. The most important one concerns the interpretability of deep networks - a central issue of the field. The hierarchical representation they build is believed to reflect the combinatorial structure of data. Tools to study the latter are scarce. Here, we show that the effect of changing latent variables at different depths can be studied by monitoring the noise level in forward-backward experiments, opening a new avenue to characterize data structure. Finally, the presence of a transition at a certain noise level suggests that it may be especially useful to train diffusion models particularly around this noise value - an idea that practitioners have just started to explore [1].\\n\\nAs suggested by the reviewer, we added these two points to our conclusions in the revised pdf document (highlighted in blue).\\n\\n[1] Barcel\\u00f3, R. et al. Avoiding mode collapse in diffusion models fine-tuned with reinforcement learning. arXiv preprint (2024).\\n\\n**2. Context-free vs. context-sensitive models**\\n\\nThe question raised by the reviewer is both interesting and subtle. On the one hand, some apparently non-tree-like graphs of latent variables can be made tree-like, if one allows for complex latent variables that encode more information. On the other hand, this is not always the case; for example, context-dependence can be present. More broadly speaking, analytically controlling diffusion models for a general graph of latent variables is intractable - in fact, it can take time exponential in the number of latent variables just to sample from the model.\\n\\nIn practice, it is known that context-free grammars are not expressive enough to capture all phenomena in the syntax of natural languages, requiring *mildly context-sensitive* models [2]. 
Our observations on WikiText thus support that our conclusion holds beyond context-free grammars, at least for mildly context-dependent structures. In the future, building models that depart gradually from a context-free (tree-like) structure may give a handle to progress on this difficult question.\\n\\nIn the revised manuscript, we have added a discussion in the conclusion section to address the limitations of the tree model and the potential for incorporating context dependencies in future work (highlighted in blue).\\n\\n[2] J\\u00e4ger, G., and Rogers, J. Formal language theory: refining the Chomsky hierarchy. Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1598 (2012): 1956-1970.\"}", "{\"comment\": \"I thank the authors for their reply. I appreciate the authors' efforts in providing the modified draft with better clarity. I am still a bit concerned about Figure 5 & Figure 6 in that the trend of correlations from 0.1T to 1.0T is not consistent. While I understand these are the results from different models on different datasets, I am wondering what the reasons behind this inconsistency are. Moreover, showing the trend of these curves with the statistical uncertainty range can help us understand the results better.\"}", "{\"summary\": \"The paper examines the hierarchical correlation structures among input tokens using a dynamic correlation function and dynamical susceptibility within a forward-backward experimental framework. These variables reveal how two input tokens respond to perturbations when attempting to recover data from noisy inputs. 
Analyzing diffusion and language models, the study demonstrates an anticipated correlation aligned with spatial structures.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1) The paper introduces novel approaches for analyzing the structure of inputs using pretrained diffusion and language models.\\n\\n(2) The authors offer a thorough analysis and derivation, with experimental results closely aligning with theoretical expectations.\\n\\n(3) Multiple schematic diagrams and data visualizations are included, providing valuable insights into the methods.\", \"weaknesses\": \"(1) The paper\\u2019s presentation could be improved. While there are numerous figures to aid understanding, the main text is somewhat challenging to follow.\\n\\n(2) Why is the \\u03c3 in Equation 3 binary? Wouldn\\u2019t a continuous measurement be more appropriate? For instance, a small difference in pixel values might not alter the semantic structure of the images, but it would be captured by binary measurement.\\n\\n(3) Shouldn\\u2019t the spatial correlation structures be content-dependent? For example, if the bird and the laptop in Figure 5 were moved slightly farther from the camera, would this change affect the result shown in Figure 2?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"N/A\"}", "{\"comment\": \"We thank the reviewer for finding our work clearly written, our analysis nice, and our experiments well-designed. We answer to their concerns below.\\n\\n**1. On the implication of a peak in the susceptibility**\\n\\nIn this work, we demonstrate that very popular models such as Gaussian data or Gaussian random fields fail to exhibit such a peak, whereas the Random Hierarchy Model, text and images do. 
This work thus emphasizes the limit of modeling data as Gaussian, and points toward the need for richer structures. That said, the reviewer is correct that the observation of a peak in susceptibility does not imply that the data must be exactly tree-like. As discussed in our reply to other reviewers, we believe that mild context dependence will not affect qualitatively our conclusions. Yet, a full classification of which graphical models may capture this phenomenon is a question for the future, as now emphasized in the text (highlighted in blue).\\n\\n**2. Mean-field approximation**\", \"the_reviewer_is_correct\": \"the epsilon process can be seen as a mean-field approximation of multinomial discrete diffusion, where the uncertainty in the reconstruction is uniformly spread onto the sequence of tokens. We then use a mean-field approach, averaging over the possible rule realizations of the RHM, to compute the correlations.\\n\\n**3. Diffusion as spectral autoregression**\\n\\nWe assume the reviewer is referring to the analysis presented in this [blog](https://sander.ai/2024/09/02/spectral-autoregression.html). Spectral autoregression aligns with the behavior observed in our Gaussian random field model, discussed in Section 3 and Appendix B, where higher frequencies are noised first in the diffusion process. Crucially, this perspective, as described in the blog, does not incorporate assumptions about or analyze the conditional probabilities between different scales (or frequencies). By contrast, these correlations among scales are central to the analyses of Marchand et al., 2022 and Kadkhodaie et al., 2023, who explore their structure and implications. Thus, we view these two lines of work as distinct.\\n\\n**Questions**\\n\\n1. *Does the susceptibility divergence tell us anything about how many levels of hierarchy are likely present? 
Or just that there is at least one level?*\\n\\nIn the Random Hierarchy Model, the phase transition and the associated length scale divergence hold in the limit of large depth. In practice, at finite depth, one observes a smooth crossover with an associated finite susceptibility peak, which increases with increasing depth (fixing the parameters). Conversely, the RHM with just a single level does not exhibit this phenomenology. In the case of real data, the susceptibility peak highlights the presence of structured dependencies consistent with a multi-level hierarchy but does not provide specific information on the depth. \\n\\n2. *The RHM is discrete, and discrete vs continuous diffusion are rather different; can you justify why RHM should be a good model for continuous data/diffusion as well?*\\n\\nWhile images are inherently continuous, they can be described at an abstract, semantic level using discrete hierarchies. As discussed in the related work section, these hierarchical representations have been formalized in *pattern theory* (Stoyan, 1997), where the decomposition is inspired by parsing methods used in linguistics. In this framework, visual scenes are decomposed hierarchically into objects, parts, and primitives, leading to practical algorithms for semantic segmentation and scene understanding. We refer to the paragraph \\\"Hierarchical models of images and text\\\" in the Related Work section for further references.\\n\\n3. *Do the MDLM and ImageNet expts actually confirm that a phase transition occurs? Or do we just observe the susceptibility peak and infer a phase transition by analogy to RHM? In particular, it seems that for ImageNet it might actually be possible to run a classifier to determine whether the class changed.*\\n\\nThe susceptibility peak in real data strongly suggests an underlying hierarchical structure. 
In the case of images, as suggested by the reviewer, we ran a convolutional classifier (a state-of-the-art ConvNeXt pre-trained on ImageNet) to determine when the class changes (note that Sclocchi et al., 2024, performed equivalent experiments). We reported the results in Figure 12 in the updated manuscript. Clearly, in correspondence to the susceptibility peak, the class of the generated images displays a sharp transition.\"}", "{\"comment\": \"Thank you for your feedback! I will maintain my current rating.\"}", "{\"summary\": \"In this paper, the authors examine the hierarchical structure in high-dimensional data by conducting forward-backward experiments within diffusion-based models. They employ a Random Hierarchy Model (RHM) for the data, where the tokens of data are generated from a tree structure of latent variables; they also use Belief Propagation (BP) as the score function to denoise samples.\\n\\nThe authors focus on the phase transition of the average belief $p_L$ of the RHM's root node by analyzing an iterative mapping (Equation 7) and identifying a critical noise level $\\\\epsilon^*$ at which the transition occurs. Based on that, they also compute the minimum layers $\\\\tilde {l}$ needed for the transition, beyond which $p_l$ would collapse to trivial fixed points $\\\\{1/v,1\\\\}$, indicating either a complete reconstruction or randomization of upper latent variables. At this specific noise level, BP can modify the deepest latent layer $\\\\tilde {l}$, yielding the maximum correlation length (i.e. big \"chunks\" of data tokens), which is the distance over which token changes remain correlated.\\n\\nTo characterize this effect, the authors introduce **dynamical susceptibility**, which first increases and then decreases with the noise level, as expected. 
They further demonstrate that the dynamical susceptibility curve has the same trend for forward-backward experiments with diffusion models and synthetic RHM experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors aim to capture hidden hierarchical structures within discrete data using the RHM model, with their RHM+BP framework supporting both discrete and continuous diffusion processes.\\n\\n2. By applying BP for denoising, the authors rigorously analyze phase transitions in the denoising results and identify the critical noise level needed to induce a change in the data class (or low-level feature).\", \"weaknesses\": \"1. The paper is somewhat disorganized and hard to follow, as definitions, derivations, and experimental results are heavily interwoven. To improve clarity, consider using theorems or structured definitions to better organize the content (e.g. by moving some derivations, such as Equations 8 and 9 to appendix and summarizing them as a main theorem).\\n\\n2. In practice, people use real data + score-based denoising; however, the authors use RHM data + BP denoising instead. This discrepancy is insufficiently justified, making the claim that real-world data shares the same hierarchy as RHM unconvincing. While the authors show a similar phase transition phenomenon between the RHM case and real-world diffusion case, they do not rigorously establish a connection between them. Verification by testing real-world diffusion on RHM data may strengthen this claim.\\n\\n3. The results are somewhat vague and lack practical insights, as it appears that neither the RHM setup nor the forward-backward experiment has direct practical applications. Although the authors mention interpreting the \\\"chunks\\\" that emerge during the forward-backward experiments, they do not provide further discussion or related work on that.\", \"questions\": \"1. 
From the analysis, BP denoising appears to be a one-step method that directly samples $\\\\hat{x_0}$ from the noisy observation $x_t$, differing from typical diffusion denoising that iteratively samples $x_{t-1}$ from $x_t$ throughout the process. Does this discrepancy exist, or are the authors also using a denoising schedule similar to real diffusion models?\\n\\n2. Can we interpret the maximal correlated length achieved at an intermediate noise level (time step) as the model generating class information or lower-level features? If so, this would contrast with existing observations that diffusion processes follow a coarse-to-fine generation pattern (e.g., https://arxiv.org/abs/2303.02490), where lower-level features are generated at the beginning, not in the middle.\\n\\n3. Figure 4(a) is somewhat unclear. Combined with (c), it seems the authors are suggesting that the largest correlated changing chunk appears at a masking fraction $t/T \\\\in [0.5, 0.7]$. However, this is not immediately evident from (a) alone.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Questions**\\n\\nQ1. See point (2) above.\\n\\nQ2. The reviewer is correct in interpreting the critical noise (or time) level as the point in the diffusion process that acts on class-level information. Our findings are in line with those of Sclocchi et al. (2024), who demonstrated that during forward-backward experiments, small amounts of noise only alter low-level features. Once the transition point is reached, the class becomes likely to change. Remarkably, even when the class changes, some low-level features from the original data are preserved.\\n\\nAlthough this might appear to contrast with the coarse-to-fine generation pattern referenced by the reviewer, these two pictures concern two different levels of description. 
In the coarse-to-fine view, higher-frequency components are affected early in the diffusion process, while low-frequency modes persist for longer. This is precisely the pattern we observe in simpler models like the Gaussian random field model, which we discussed in Section 3.3 and Appendix B. This viewpoint may be an appropriate starting point to describe the effect of diffusion on images at a geometric or power spectrum level.\\n\\nBy contrast, our empirical results test our predictions at a semantic level (remember we consider a CLIP encoding). For images, this means that features can correspond to parts of objects - such as the eyes, mouth, and nose of a face - rather than simple geometric or frequency components. The RHM appears to be a useful model for describing these high-level aspects of images.\\n\\nWe have added a citation to this work in the related work section, together with the discussion above (highlighted in blue).\\n\\n\\nQ3. The correlation length measures the distance over which the fluctuations of changes are correlated. Figures 4(b) and 4(c) show that the tokens are changed together at a maximal distance when the masking fraction is between 0.5 and 0.7. As the noise level increases further, the correlation length decreases, indicating that changes become less correlated. It\\u2019s important to note that measuring fluctuations and establishing correlation length requires statistical analysis across many instances. Therefore, the pattern we describe is not discernible from a single example in Figure 4(a) alone, which is meant to illustrate the process. We hope this clarifies the reviewer's concern.\"}", "{\"comment\": \"We thank the reviewer for their response and feedback. We believe the reviewer is referring to Figures 4 and 6, which present the data for text and images, rather than Figures 5 and 6. These two datasets inherently have very different structures. 
As a result, it is expected that their correlation functions at different times will not be identical. Similarly, in the RHM, varying the parameters of the model or the diffusion process also leads to changes in the shape of the correlation functions and the location of the susceptibility peak.\\n\\nHowever, the presence of a peak in correlation length and susceptibility remains consistent across both modalities. Our work relates the presence of this peak, and not its precise location, to a latent hierarchical structure.\\n\\nAs requested by the reviewer, in the revised manuscript we will add the statistical error bars, which are relatively small and do not change our conclusions.\"}", "{\"summary\": \"This paper suggests that forward-backward diffusion experiments can be helpful in uncovering hierarchical structure in data. They first study a synthetic Random Hierarchy Model and show that a peak of the dynamical susceptibility (related to correlations between blocks of tokens) occurs at a noise level where a phase transition is known to occur in the RHM (i.e. the latent class at the root changes). They then show peaks in the susceptibility in text and image experiments.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clearly written and the RHM model is nicely studied. The text and image experiments are well-designed (although I still have some questions, below, about how well the conclusions from RHM transfer to real data). I appreciate the application of ideas from Physics to ML problems.\", \"weaknesses\": \"Please see Questions.\", \"questions\": \"L57: Does such a divergence definitely indicate a hierarchical structure or are there other ways/reasons divergence could occur? i.e. 
is this divergence a \\u201cproof\\u201d of (or very strong evidence for) hierarchy?\", \"l210\": \"Clarification: so, the epsilon-process is itself a mean-field approximation of the discrete diffusion process, but then you use another mean-field on top of that to compute the correlation?\", \"l356\": \"Can you elaborate on why a susceptibility peak is a \\u2018smoking gun\\u2019 for hierarchy? Just because one nonhierarchical example doesn\\u2019t have a susceptibility peak doesn\\u2019t mean there might not be others that do?\", \"l511\": \"How (or does) this relate to the diffusion-as-spectral-autoregression point of view? Also there is a typo, 'trough'.\", \"general\": \"Does the susceptibility divergence tell us anything about how many levels of hierarchy are likely present? Or just that there is at least one level?\\n\\nThe RHM is discrete, and discrete vs continuous diffusion are rather different; can you justify why RHM should be a good model for continuous data/diffusion as well?\\n\\nDo the MDLM and ImageNet expts actually confirm that a phase transition occurs? Or do we just observe the susceptibility peak and infer a phase transition by analogy to RHM? In particular, it seems that for ImageNet it might actually be possible to run a classifier to determine whether the class changed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the great rebuttal, I have raised my rating to 6. I appreciate the authors' efforts to enhance the clarity and rigor of their work (Point 1), and I also found their clarifications on the implications, including the universality of their findings (Point 3) and the interpretation of the semantic aspect (Q2) very insightful.\\n\\nHowever, I remain skeptical about using BP+RHM to mimic real-world diffusion (Point 2 and Q1). 
While I agree with the authors that if the data follow RHM, BP corresponds to the optimal posterior estimation $E[\\hat{x}_0|x_t]$, a major issue is that the $\\epsilon$-process is not equivalent to the forward-backward experiment. In the forward-backward experiment, simulating the reverse SDE typically requires multiple posterior estimations. Relying on a single posterior estimation of $\\hat{x}_0$ would yield significantly different results\\u2014often resembling a blurred average of the training data. Since the $\\epsilon$-process analyzes $\\hat x_0$, its conclusions differ from the subsequent experiments in Section 4.\"}", "{\"comment\": \"Thanks for clarifying, but my concern was more generally about whether RHM is a realistic model. I will keep my score. I can reduce my confidence if the authors would like.\"}", "{\"comment\": \"We thank the reviewer for recognizing the novelty of our contribution and acknowledging the rigor of our analysis, along with the solid experimental validation provided. Below, we address the reviewer's specific concerns.\\n\\n**1. Presentation**\\n\\nWe have made several improvements to the presentation and structure of our paper in response to feedback from multiple reviewers, including:\\n\\n- We have added a new paragraph at the end of the introduction section that explicitly outlines the structure of the paper, providing a clearer roadmap for the reader.\\n- We have isolated the main definitions from the core text, presenting them in a more structured and standalone manner to improve readability.\\n- We have moved the detailed computations of the dynamical correlation length analysis (originally in Subsection 3.1.1) to the Appendix. In the main text, we have provided a more concise and accessible description of the results.\\n\\nWe welcome any further suggestions for enhancing the clarity of our work and are open to elaborating on specific points that may require additional explanation.\\n\\n**2. 
Binary vs. continuous spin variables**\\n\\nIn Equation 3 (Equation 2 of the updated manuscript), our choice to use binary spin variables $\\\\sigma_i$ stems from the discrete nature of the model under consideration in that section, where features are equidistant and represented categorically. However, we agree that a continuous measure is indeed more suitable for continuous data types, such as images. Specifically, for image data, we account for the continuity of variations by measuring the L2 distance between patch embeddings before and after the forward-backward procedure (see Equation 7). To clarify this point, we have added a sentence when introducing the discrete spin variables, noting that we will extend them to accommodate continuous data in the subsequent sections (highlighted in blue).\\n\\n**3. Content dependence of spatial correlation structure**\\n\\nWe acknowledge that spatial correlations between changes in a single image are influenced by the content. Specifically, variations in camera distance or object placement can alter the spatial correlation structure of an individual image. Nevertheless, our analysis in Figure 6 reports average spatial correlations across a large set of images. These average correlations are robust to individual content variations as long as the data distribution of the initial images remains consistent. In other words, while the spatial correlation for any individual image is content-dependent, the trends in Figure 6 represent an aggregate measure, capturing the statistical properties of the dataset.\\n\\nWe hope our responses provide sufficient clarification and would be happy to elaborate further or address additional concerns as needed.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the clarifications and additional experiments. I now understand that the authors are analyzing the entire progressive sampling process, which resembles the forward-backward experiment. 
Moreover, they use mean-field theory for simplification, without which, understanding the sampling process could be challenging for unstructured data distributions.\\n\\nI think the paper is accomplished, but I am still not very confident about how realistic the RHM is. I can increase my ratings and lower my confidence score if the authors would like.\"}", "{\"comment\": \"We thank the reviewer for their feedback and address their specific concerns below.\\n\\n**1. Presentation**\\n\\nWe appreciate the reviewer's feedback on improving the readability of our paper. Our work follows a structured approach: we first present our theoretical framework using the synthetic hierarchical model of data, the RHM. Then, we validate these theoretical results through numerical experiments. Finally, we test our predictions in real-world scenarios involving state-of-the-art diffusion models for both images and text. In response to your suggestion, we have taken the following steps to enhance the clarity of our presentation:\\n\\n- We have explicitly highlighted the structure of the paper by adding a paragraph after the introduction.\\n- We have separated and presented the main definitions in a more structured, standalone format, making them easier to follow.\\n- We have moved the detailed computations of the dynamical correlation length analysis (originally in Subsection 3.1.1) to the Appendix. In the main text, we have provided a more concise and accessible description of the results.\\n\\nWe believe these revisions (highlighted in blue in the updated pdf) have significantly improved the clarity and readability of our work. We welcome any further feedback or suggestions on how we might continue to refine our presentation.\\n\\n**2. BP vs. neural network for denoising**\\n\\nScore-based generative models generate samples by reversing a forward diffusion process that progressively adds noise to data. 
In practice, these models employ a neural network to approximate the score function, i.e., the gradient of the log density of the data. The score enables running the backward dynamics. The score function is implicitly related to the conditional expectation $\\\\mathbb{E}[x_0|x_t]$, where $x_t$ is a noisy observation at time $t$. Given this conditional expectation, new samples can be generated by running a time-discretized backward dynamics.\\n\\nFor tree-like models such as the RHM, the conditional expectation $\\\\mathbb{E}[x_0|x_t]$ can be computed exactly via message-passing algorithms like Belief Propagation (BP). As noted in L157, this corresponds to having access to a neural network that has learned the exact population score, which can be used to run the backward dynamics. However, BP also allows for the sampling of new data directly from the exact posterior $p(x_0|x_t)$ without running the reverse process. In the limit of an infinite number of diffusion steps, the two sampling processes are equivalent. Thus, our predictions with the RHM are directly comparable to real-world data.\\n\\nWe have incorporated these clarifications into the updated manuscript (highlighted in blue).\\n\\n**3. Practical applications**\\n\\nSee our reply to reviewer [ohKX](https://openreview.net/forum?id=0GzqVqCKns&noteId=wyYxlhrW0P). We want to clarify further that the primary goal of our research is to provide a fundamental analysis of the hierarchical structure of data belonging to different modalities and how diffusion models capture and utilize this structure. Our findings contribute to the growing body of work on the interpretability of latent representations in neural networks, offering a fresh perspective by examining correlated changes observed in forward-backward experiments with diffusion models. On the one side, our work supports the hypothesis that hierarchical latent structures are universal properties underlying natural data as diverse as images and language. 
On the other side, it opens new possibilities for future work on the interpretation of these correlated changes, e.g., in terms of the syntactic structure of a language. These further works will be data-specific, whereas the present work emphasizes the universal connection between theory and observations in very diverse settings.\\n\\nWe clarified the text to make that point (highlighted in blue).\"}", "{\"summary\": \"This paper aims to understand diffusion models through a hierarchical latent variable model. Through this framework, this paper demonstrates the connection between the noise level and the hierarchical levels, as evidenced by a transition phrase. This paper builds on the tools from physics and illustrate their theoretical model with empirical results on practical models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The hierarchical perspective provides novel insights into the diffusion model's mechanism and the application of physics is also refreshing. I feel that the community can benefit from these insights, which may give rise to empirical advancements.\\n2. The paper is well written and clearly communicates the main ideas.\\n3. The experiments on natural data (image/text) support the theoretical claims.\", \"weaknesses\": \"1. It'd be great to see attempts at utilizing the theoretical/empirical observations to advance practical model design. Some discussions along this direction would also be appreciated.\\n2. The tree model seems overly simplified for real-world data like images and languages. For example, one would imagine two high-level variables could become co-parents for some low-level variables, thus breaking the tree structure. 
I would appreciate a discussion on this limitation and the applicability of the theoretical framework to more general latent models.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for appreciating our additions and clarifications.\\n\\nWe point out that the concern of reviewer s11H about our sampling procedure is due to a misunderstanding. The two procedures are **exactly equivalent** (see the [new answer to reviewer s11H](https://openreview.net/forum?id=0GzqVqCKns&noteId=3RnKV6ijLu) and the added discussion and experiments in Appendix A.1.3). We therefore kindly invite the reviewer to reconsider their assessment of our work.\"}", "{\"comment\": \"We thank the reviewer for their appreciation and for proposing to raise their score. We believe it would be warranted since although the RHM is an idealized model of data, it makes non-trivial predictions confirmed both in image and text datasets.\"}", "{\"comment\": \"Thank you for the helpful clarifications! I particularly appreciate the addition of Figure 12 showing that a phase transition actually occurs for ImageNet and its location coincides with the susceptibility peak. I also liked the new section on \\\"Hierarchical models of images and text\\\". I have increased my Soundness rating to 4 but will stick with my original score of 6 (since I share some of reviewer s11H's concerns).\"}", "{\"comment\": \"We thank the reviewer for reconsidering their score. Regarding the concern raised in the last comment, it seems there is a misunderstanding. 
As highlighted in Section 2.2.1 and in our earlier response, BP does not only provide access to the **posterior mean** $\\\\mathbb{E}[\\\\hat{x}_0|x_t]$ but also to the full **posterior distribution** $p(\\\\hat{x}_0|x_t)$.\\n\\nIn our approach, we do not sample data using the posterior mean. Instead, **we sample from the posterior distribution** using the following procedure. We begin by sampling the root node using the marginal probability computed by BP. Then, we condition on the sampled symbol, update the beliefs, and sample one latent variable at the next layer $L-1$. This procedure is repeated node by node, descending through the tree until we generate a complete configuration at the bottom layer (cf. Mezard and Montanari, 2009). \\n\\nAs mentioned in our previous answer, this sampling approach is **exactly equivalent** to running the reverse process using the posterior means in the limit of an infinite number of diffusion steps. Thus, our results on the RHM do not differ from the subsequent experiments in Section 4. To further clarify this point, we have added a detailed explanation of our sampling procedure to Appendix A.1.3. Moreover, we have added **a new experiment** for masking diffusion of the RHM, comparing the correlation functions and the dynamical susceptibility obtained with the two sampling methods, i.e., sampling from the posterior computed with BP and running the backward diffusion dynamics using the score function. Figure 8 of the updated manuscript shows that they yield identical results. \\n\\nWe thank the reviewer again, and we hope that our answer and the new data resolve their skepticism. \\n\\nMezard, M. and Montanari, A. (2009). Information, physics, and computation. Oxford University Press.\"}" ] }
0GC81gpjOo
Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation
[ "Jiaqi Shao", "Tianjun Yuan", "Tao Lin", "Bing Luo" ]
Cognitive abilities, such as Theory of Mind (ToM), play a vital role in facilitating cooperation in human social interactions. However, Large Language Model (LLM) agents with higher ToM abilities do not necessarily exhibit better cooperative behavior compared to those with lower ToM abilities, highlighting the complexity of translating human cognitive processes to artificial intelligent agents. To address this challenge, we propose a novel matching coalition mechanism that leverages the strengths of agents with different ToM levels by explicitly considering belief alignment and specialized abilities when forming coalitions. Our proposed stable coalition formation algorithm seeks to find the team that maximizes the potential for cooperative trends and ensures long-term viability. By incorporating cognitive insights into the design of multi-agent systems, our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies that foster cooperation and improve overall system performance.
[ "Multi-Agent Cooperation", "LLM", "Theory of Mind" ]
Reject
https://openreview.net/pdf?id=0GC81gpjOo
https://openreview.net/forum?id=0GC81gpjOo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xgPGc1Av68", "wklfW2vl1y", "vofGg8BKpa", "uDOwU2Is7U", "u4wICdxt98", "sYM0tskNye", "sFG7ydO0vq", "r4CbgGYrX9", "q6dxd5IVLe", "oaM8RGo7hf", "mwQhkWBpR5", "joaCvgaV6C", "jRLLDtFiCU", "i1PrgYN0NE", "hzcIw4DZvg", "h0R7utaKo0", "fx8p9aCilO", "exe73m2jwx", "d3lmuxdAHb", "VWqU4OVgMn", "Q40YB0Yo6o", "LRMXGELhhf", "KIZ78VNS5v", "Jm7Bq1AxvY", "JH3zWUO9Tq", "E0AfNS95Vh", "CaMgWHg7kR", "BNDmXgaq72", "B1p7UFGasa", "AwfeCCSXPG", "6thJJuZwPb", "4oGi53hpv6", "2p3fB2jBGM", "12KN52kJPp", "0hypIpThnc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732190789697, 1732472769939, 1732190033474, 1730843390933, 1732526049205, 1732016502272, 1732347279292, 1732190948318, 1732472241187, 1732707961062, 1730676867364, 1732705426912, 1732526767133, 1733040146252, 1732021788579, 1732191958234, 1732525881640, 1732021356556, 1730565153575, 1732022025990, 1732352467313, 1732666526398, 1737523786442, 1732190182157, 1730700987638, 1732187781035, 1732471453757, 1734702642547, 1732188088681, 1732673265014, 1733040236563, 1732068859378, 1732015379155, 1732673501051, 1732191334745 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_zFGd" ], [ 
"ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_BmBs" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_P68Z" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_rCxA" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_P68Z" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_BmBs" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_rCxA" ], [ "ICLR.cc/2025/Conference/Submission6717/Area_Chair_TAdr" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Reviewer_P68Z" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ], [ "ICLR.cc/2025/Conference/Submission6717/Authors" ] ], "structured_content_str": [ "{\"title\": \"Re. B-W1. Strength of Motivating Claim\", \"comment\": \"> B-W1. Relatedly, looking through the actual LLM outputs included in the appendices, the level-2 ToM responses seem quite strange. 
They are worded as if they are predicated on the other agents\\u00a0*actually observing*\\u00a0actions in advance, rather than\\u00a0*anticipating*\\u00a0instructions. I am not really sure what is going on here, but reading through it was not at all surprising that the higher ToM agents performed less well on the task, as they appeared to be being mis-instructed.\\n> \\n\\n**R.B-W1.**\\nWe sincerely thank the reviewer for these insightful technical points about ToM levels and agent interactions. We'll address each point:\\n\\n- **ToM implementation** works recursively over multiple interactions. When we say \\\"observing\\\" in level-2 ToM responses, this refers to observing outcomes from **previous interaction** **rounds** to inform **predictions about future actions of other agents**. This creates a recursive reasoning chain:\\n - Round 1: Initial beliefs\\n - Round 2: Update beliefs based on Round 1 observations\\n - Round 3: Further updates incorporating both previous rounds\\n And so on...\\n- To make the exact format of beliefs and nested reasoning processes explicit, we expand our formulation to show the recursive structure for an LLM agent $i$ at cooperation round $R$ (This revision is updated in our manuscript page 4 & 5).\\nWe revise its $k$-level ToM function (Eq. 
1) as:\\n \\n $\\\\text{ToM}\\\\_i^k(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^{k-1}\\\\_{i, R}(a\\\\_m^R)\\\\\\\\}\\\\_{m \\\\neq i}) := b\\\\_{i, R}^k$\", \"where\": [\"$o_i^{1:R}$ represents agent $i$'s observation history up to round $R$, including current task state, self-actions, and collaborative teammates.\", \"$\\\\hat{a}_{-i}^{1:R-1}$ represents other agents' action history up to round $R-1$\", \"$\\\\\\\\{b^{k-1}\\\\_{i, R}(a\\\\_m^R)\\\\\\\\}\\\\_\\\\{m \\\\neq i\\\\}$ captures agent $i$'s prediction of agent $m$'s action at round $R$ based on $(k-1)$-level ToM reasoning:\", \"$b^{k-1}\\\\_{i, R}(a\\\\_m^R) = p(a\\\\_m^R | \\\\text{ToM}\\\\_i^{k-1}(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^{k-2}\\\\_{i, R}(a\\\\_l^R)\\\\\\\\}\\\\_{l \\\\neq i}))$\", \"Moreover, we want to emphasize that our core contribution is the coalition formation mechanism as a plug-and-play approach for improving multi-agent cooperation, rather than establishing fundamental properties of ToM levels.\"]}", "{\"comment\": \"We sincerely thank the reviewer's thoughtful comments and suggestions. As we approach the end of the discussion period, we wanted to respectfully follow up regarding your concerns.\\n\\nWe have provided detailed responses to address these concerns, including explanation of future scalability approaches and the additional analysis of ToM effectiveness across different environments. \\n\\nPlease let us know if you have any remaining concerns. Thanks again for your thoughtful and constructive feedback making our paper clearer.\"}", "{\"title\": \"Re. A. Matching Algorithm Confusions\", \"comment\": \"> A-W1. The authors start by talking about the set of an agent $i$'s partners $\\\\mu(i)$, but based on their matching algorithm they seem to implicitly assume that $\\\\mu(i)$ is always a singleton (see Equation 2). 
Otherwise, matchings are stable only when player $i$ would not prefer to be partnered with $j$ over its current\\u00a0*set*\\u00a0of partners $\\\\mu(i)$. But what would that mean? Are preferences between sets of agents instead of single agents? If so, the relevant comparison would presumably be that agent player $i$ would not prefer to be partnered with any\\u00a0*set*\\u00a0of other agents over its current\\u00a0*set*\\u00a0of partners $\\\\mu(i)$.\\n\\n**R.A-W1**. Matching Implementation:\\n We sincerely thank the reviewer for the thoughtful feedback on the formulation. In our original manuscript, we focus on a one-to-many matching structure, specifically motivated by the Project Manager (PM) and Engineers scenario: **PM Role (One Side,** Single PM with ToM capabilities) and **Engineer Role (Many Side).** In our implementation, we enforce a minimum coalition size to ensure effective cooperation. We maintain $|\\\\mu(i)| \\\\geq k$, where k is a predefined #min_coalition_size -1 (typically #min_coalition_size=$\\\\lceil N/2 \\\\rceil$ in our setting, where N is the total number of agents in our experiments).\\n \\n**Generalization to Other Settings**. We acknowledge that our implementation also considers other scenarios (like debates where **Equal-role agents are forming teams**). To accommodate **both hierarchical and peer-based scenarios**, we propose a generalized formulation adapted to our implementation:\\n \\n- **Preference Structure:** For agent $i$, preferences over coalitions are defined by belief-action alignment scores:\\n $B\\\\_i(S) = \\\\frac{1}{|S|} \\\\sum\\\\_{j \\\\in S} \\\\phi(b^k\\\\_i(a\\\\_j) - \\\\hat{a}\\\\_j)$\\n For two potential coalitions $S\\\\_1, S\\\\_2$:\\n $S\\\\_1 \\\\succ\\\\_i S\\\\_2 \\\\Leftrightarrow B\\\\_i(S\\\\_1) < B\\\\_i(S\\\\_2)$\\n- **Stability**: A matching $\\\\mu$ is stable if there exists no blocking coalition $C \\\\subseteq N$ where:\\n (1). $|C| > k$ (minimum size requirement)\\n (2). 
$\\\\forall i \\\\in C: C \\\\succ\\\\_i \\\\mu(i)$ (coalition preferred by all members)\\n- **Generalization**: we discuss the specialized scores in PM and Eng setting in Appendix C.1, where specialized ability scores primarily influence the PM, since effective leadership and coordination capabilities are crucial (as also evidenced by extensive evaluation in Appendix F.3, which is detailed in our **response R.B-W2.**)\\n \\nWe updated our manuscript according to the reviewer's constructive feedback, detailed on pages 5 & 6, highlighted in purple color. \\n\\n> A-W2. The preference order described in equation 4 are based on agents having different skill levels $\\\\alpha_i$ on different tasks, but where do these skill levels come from? More importantly, if the point is to match agents with complementary skills, why does the matching algorithm only compare agents' skills on a\\u00a0*single*\\u00a0task?\\n> \\nWe will clarify the reviewer's concern point by point:\\n\\n1. Source of Skill Levels ($\\\\alpha_j$):\\nThe skill levels come from agents' self-assessment based on their assigned roles and capabilities. For example, in the programming task, an Engineer with testing expertise = 0.9 for testing tasks. \\n2. Our current work focuses on one **task-specific requirement** in each (cooperation) round. For instance: If the **current agent i needs testing expertise**:\\n - $\\\\alpha_j$_testing (0.9) > $\\\\alpha_m$_testing (0.5)\\n - We will extend our work by incorporating multi-task skill vectors in the future.\"}", "{\"summary\": \"This paper focuses on the problem of cooperation in multi-agent systems when the agents are LLM agents. 
In particular, this work focuses on how theory of mind interacts with cooperation and introduce a mechanism for designing diverse ToM groups amongst these agents that optimise the overall cooperative performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The related work is clear and concise.\", \"The paper is well motivated\", \"The empirical results for HumanEval demonstrating the effectiveness of the matching mechanism to match agents to those that they are able to accurately predict beliefs about is promising. This is alongside promising improvements in terms of Pass@1 rates.\", \"The empirical results are similarly promising in terms of problem solving and general reasoning.\"], \"weaknesses\": [\"Whilst the authors do mention that the coalition formation is generally an NP-hard problem, they do not offer any ideas about potential future possibilities that would help with the scalability of the framework\", \"I do not understand the prompt referenced in Appendix A and the corresponding LLM output. The belief model is rather vague, and when looking at the output of the alignment scores it seems a bit arbitrary - e.g. the belief model does not mention using an object oriented approach, but in the alignment score this seems to be highly valued? I am just slightly concerned that some of the alignment scores outputted by the LLMs are not particularly strong signals and ideally it would be measured using something more robust.\", \"Overall, my main concern is the potential scalability of the proposed framework, with firstly the coalition forming being difficult and secondly the requirement to generate beliefs over all other agents. Furthermore, whilst the empirical results are good and I am not downplaying them, I am not convinced the proposed settings are those that can really leverage ToM fully. 
However, this is not impacting my score.\"], \"questions\": [\"For the insight that low ToM exhibits better cooperation compared to high ToM, I wonder how specific this is to the environment being looked at. For example, the multi-agent programming setting, at least to me, does not strike me as an environment that requires much ToM to successfully cooperate in, therefore low ToM being more successful may simply be due to the lower complexity of using it. Have the authors noticed this same trend in other environments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part B Regarding Rematching\", \"comment\": \"> I also appreciated the updates to the paper regarding the algorithm for finding coalitions/teams, though I still have a few concerns. First, the authors state that when agent $i$ signals a desire to rematch (in line 9 of Algorithm 1), the algorithm:\\n> > Triggers a new cooperation round and updates the preference orders based on the latest belief-action alignmentsIf a stable matching is found, proceed with the new coalitionsIf no stable matching is found, keep the current coalitions for one more round\\n> \\n\\n> QB1: Why would the preference orders need to be updated? Why/have have the belief-action alignments been updated?\\n> \\n\\n**R.QB1**.\\nTo clarify, the update preference order is not specific to \\u201crematching\\u201d, its a **standard procedure** for \\u201ca new cooperation round.\\u201d In each new round, the beliefs are updated due to last round updated execution/actions. \\n\\n> QB2: The authors claim that their algorithm computes a stable matching. How then, is the final step ever to be executed? 
Moreover, even if no stable matching is found, how would running the matching process for one more round help?\\n> \\n\\n**R.QB2**: Stable Matching and One Round Continuation\\n\\n- This one more round serves as an **adaptation period** where alignment scores may naturally improve.\\n\\n**Comprehensive Example**: Let us explain why continuing one more round can help through a detailed example:\\n\\n**Initial State (Round t)**:\\n\\n```", "agents": "PM, E1, E2, E3", "current_coalition": [ "{PM, E1, E2}", "PM's Beliefs & Alignment Scores:", "E1: 0.7 (misaligned)", "E2: 0.6 (misaligned)", "**E1, E2 Rematch signal (broadcast information)**", "```", "**Why Continue One Round?**", "1. **Adaptation Phase (Round t+1)**:", "During the continued round (given the rematch messages):", "E1 & E2 know their last actions caused the rematch signal", "E1 & E2 may adapt their current actions", "Updated Alignment Scores if actions are adapted:", "E1: 0.7 \\u2192 0.4 (improved)", "E2: 0.6 \\u2192 0.3 (improved)", "2. **Next Matching Attempt (Round t+2)**:", "Coalition {PM, E1, E2} now has better alignment", "No need to rematch.", "**Key Insight**:", "Without this continuation round, we would miss the opportunity for cognitive agents to adapt in subsequent rounds. This reflects the **dynamic nature of cognitive agent cooperation**, where temporary stability can lead to improved matching conditions.", "We add ***intuition for this rematching design*** in our updated manuscript Lines 303-305, and revised Algorithm 1 to be more detailed." ]}", "{\"comment\": \"We sincerely thank reviewer BmBs for highlighting our contribution of combining coalition matching with Theory of Mind.\\n\\n> W1. The calculation of the semantic similarity of beliefs and actions is left to the LLM, this does not lend itself to a general approach as the title alludes to. 
It is made clear throughout the paper that this is applied to LLMs however and I do not see this as a big weakness, but would like to see this made clear in the title if possible.\\n\\nR1. We appreciate the reviewer's comments about the semantic similarity calculation and its implications for generalizability. \\n \\n- For LLM-based agents, our approach **leverages one of the key advantages** of LLMs: their ability to handle **open-ended trajectories**. The self-evaluation method allows LLMs to assess alignment in complex, unstructured action spaces where traditional similarity metrics might fall short. This is particularly valuable in scenarios involving natural language interactions and creative problem-solving, where the action space cannot be easily enumerated.\\n\\n- For **non-LLM environments**, our matching mechanism could be adapted more straightforwardly due to the typically more **constrained and well-defined action spaces**. For example, in game environments, actions could be discrete choices or continuous control signals. In such cases, belief-action alignment could be computed using **traditional similarity metrics** like Euclidean distance, KL divergence, or task-specific reward functions. While this would require **more careful design of the action space** for each specific task, the computation of alignment scores would be more straightforward and computationally efficient.\\n \\nWe will make this distinction clearer to better reflect the current focus of our work, and we propose modifying the title to explicitly mention LLM-based agents.\\n\\n\\n> W2. In the debate environment the baselines where both affirmative and negative lead to a bias of the affirmative winning 65.45% of the time, that is they are both using the same method. This is a cause for concern that this result may not be robust enough and might simply be taking advantage of this bias, is it possible to show the results the other way around? 
(With your model placed in the negative.)\\n\\nR2. We thank the reviewer for raising a valid concern about the affirmative side bias in our debate experiments. To address this, we conducted additional experiments with our model on the negative side:\\n\\n| Setting | Win Rate |\\n| --- | --- |\\n| No-ToM | 34.55% (65.45% Win Rate for the affirmative side) |\\n| ToM w.o. Matching | 25.45% |\\n| ToM w. Matching (Ours) | **36.36%** |\\n\\n**Remarks**: These results demonstrate that:\\n\\n- The coalition matching mechanism provides **robust benefits regardless of the debate side**.\\n- The improvement over baseline is **consistent with our original findings**.\\n\\n> Minor: The acronym FTM (Fraction of trust members/Frequency of team matching) is used multiple times making some sections difficult to understand.\\n\\nWe will revise the paper to maintain \\\"FTM\\\" only for \\\"Fraction of Trusted Members\\\". \\n\\n> Q1. For a non-LLM environment, how will the matching scores be calculated?\\n\\nPlease refer to our response R1 to W1.\\n\\n> Q2. In the debate environment the baselines where both affirmative and negative lead to a bias of the affirmative winning 65.45% of the time, that is they are both using the same method. This is a cause for concern that this result may not be robust enough and might simply be taking advantage of this bias, is it possible to show the results the other way around? (With your model placed in the negative.)\\n\\nPlease refer to our response R2 to W2.\"}", "{\"comment\": \"Reply to R1:\\nI am happy with this explanation, and would be willing to defend a change to your title to explicitly mention LLMs to align with your content should this be accepted and the title change allowed.\", \"reply_to_r2\": \"Changing the position of the agent to being on the negative and the results shown do match the remarks made, and I believe including this would likely lead to a better understanding of the contribution of the work. 
\\n\\nWhile this specific debate environment does appear to have a bias for the affirmative side winning, these results do address my concerns about the method leveraging these biases. The findings shown here are consistent with that of the affirmative side, that there is a decrease in win rate with only ToM, and an increase in win rate when both coalition matching and ToM are included.\\n\\nThis shows that ToM is not enough on its own to improve the win rates. In fact, it appears to make it worse in both cases, affirmative or negative.\\nThis highlights the contribution that both coalition matching + ToM are both needed, and that this addition does improve the win rates by a consistent amount over no-ToM in both cases.\", \"minor\": \"Maintaining that FTM is only used for Fraction of Trusted Members is a welcomed change.\"}", "{\"title\": \"Re. B-W2. Strength of Motivating Claim\", \"comment\": \"> B-W2. As a final sub-point on this topic, I suggest that the authors also benchmark against a 0-level PM and against settings where the agents are 1-level or 2-level reasoners, at least for their motivating experiments described in Table 1.\\n> \\n\\n**R.B-W2.** \\nWe sincerely thank the reviewer for the thoughtful feedback. We agree that evaluating different ToM configurations is crucial for examining ToM\\u2019s impacts on cooperation:\\n\\nWe have added comprehensive experiments incorporating **different ToM configurations** for Project Manager (PM) and Engineers (Eng) and track **performance metrics** (Pass@1) to validate cooperation effects (detailed in Appendix F.3 of the updated manuscript).\\n\\nThe following provides a summary of our new evaluation results (key results are presented in Table 2 & 3):\\n\\n### 1. 
Initial Performance Comparison\\n\\n**Table 1: Initial Pass@1 Scores (Round 1) on HumanEval and MBPP**\\n\\n| PM ToM | Eng ToM | HumanEval | MBPP |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.87 \\u00b1 0.01 | 0.525 \\u00b1 0.01 |\\n| 0 | 2 | 0.90 \\u00b1 0.02 | 0.56 \\u00b1 0.01 |\\n| 1 | 1 | 0.90 \\u00b1 0.01 | 0.55 \\u00b1 0.02 |\\n| 1 | 2 | 0.90 \\u00b1 0.02 | 0.56 \\u00b1 0.02 |\\n| 1 | 0 | 0.93 \\u00b1 0.02 | 0.56 \\u00b1 0.01 |\\n| 2 | 0 | 0.90 \\u00b1 0.01 | 0.55 \\u00b1 0.02 |\\n\\n**Key Observation**: Similar initial performance across ToM configurations.\\n\\n### 2. Performance Degradation Without Matching\\n\\n**Table 2: Pass@1 Score Changes Without Matching (Round 1 \\u2192 Round 5)**\\n\\n| PM ToM | Eng ToM | HumanEval Change | MBPP Change |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.87 \\u2192 0.83 (\\u21934.6%) | 0.525 \\u2192 0.46 (\\u219312.4%) |\\n| 0 | 2 | **0.90 \\u2192 0.83 (\\u21937.8%)** | **0.56 \\u2192 0.45 (\\u219319.6%)** |\\n| 1 | 1 | 0.90 \\u2192 0.87 (\\u21933.3%) | 0.55 \\u2192 0.50 (\\u21939.1%) |\\n| 1 | 2 | **0.90 \\u2192 0.85 (\\u21935.6%)** | **0.56 \\u2192 0.47 (\\u219316.1%)** |\\n| 1 | 0 | 0.93 \\u2192 0.91 (\\u21932.2%) | 0.56 \\u2192 0.52 (\\u21937.1%) |\\n| 2 | 0 | **0.90 \\u2192 0.85 (\\u21935.6%)** | **0.55 \\u2192 0.49 (\\u219310.9%)** |\\n\\n**Key Finding**: Higher ToM configurations show larger performance drops without matching, supporting our claim that raw ToM capabilities may actually hinder sustained performance.\\n\\n### 3. 
Recovery with Matching Mechanism\\n\\n**Table 3: Performance Recovery with Matching (Round 5)**\\n\\n| PM ToM | Eng ToM | HumanEval | MBPP |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.86 (\\u21913.6%) | 0.46 (\\u00b10%) |\\n| 0 | 2 | **0.87 (\\u21914.8%)** | **0.47 (\\u21914.4%)** |\\n| 1 | 1 | 0.88 (\\u21911.1%) | 0.52 (\\u21914.0%) |\\n| 1 | 2 | **0.88 (\\u21913.5%)** | **0.55 (\\u219117.0%)** |\\n| 1 | 0 | 0.93 (\\u21912.2%) | 0.57 (\\u21919.6%) |\\n| 2 | 0 | **0.96 (\\u219112.9%)** | **0.60 (\\u219122.4%)** |\\n\\n**Key Result**: Our matching mechanism effectively leverages ToM capabilities, **PM(ToM=2) + Eng(ToM=0) with matching achieves best sustained performance.**\"}", "{\"comment\": \"As we approach the end of the discussion period, we wanted to respectfully follow up regarding your concerns.\\n\\nWe have provided detailed responses to address these concerns, including clarifying the ToM inference mechanism, explaining the belief format structure, and discussing our evaluation. \\n\\nPlease let us know if you have any remaining concerns. Thanks again for your thoughtful and constructive feedback making our paper clearer\"}", "{\"title\": \"Summary of rebuttal (including main concerns, our responses and improvements)\", \"comment\": [\"We sincerely thank the reviewer for their thorough and constructive feedback. Below is a comprehensive summary addressing both initial concerns (W1-W5), initial questions (Q1-P2) and follow-up questions (R1-P5, P1-P3).\", \"## 1. 
Theory of Mind (ToM) Formulation (W1&2, R1&2, P3, Q1)\", \"> **Main Concerns:**\", \">\", \"Deviation from common higher-order ToM definition\", \"Unclear belief representation for different ToM inferences\", \"How does agent i access other agents' hidden mental states?\", \"> **Our Responses:**\", \">\", \"### ToM Definition and Implementation\", \"Clarified recursive ToM formulation for agent i at round R: $ToM\\_i^k(o\\_i^{1:R}, \u00e2\\_{-i}^{1:R-1}, \\\\{b^{k-1}\\\\_{i,R}(a_m^R)\\\\}\\\\_{m\u2260i}) := b\\_{i,R}^k$\", \"No direct access to others' mental states; beliefs derived from observations\", \"The reviewer seems to misunderstand the \u201cothers\u2019 beliefs\u201d in higher-level ToM. Recursive reasoning does not rely on \u201cothers\u2019 actual beliefs\u201d.\", \"Demonstrated ToM reasoning through a chess game example for agent i:\", \"Level 0: Direct observations of game state and move history\", \"Level 1: \\\"j has seen my aggressive style\\\" (this is agent i\u2019s belief, not j\u2019s)\", \"Level 2: \\\"i think j probably thinks I will make another aggressive move\\\"\", \"*Manuscript Updates***:**\", \"Clarified ToM formulation (Pages 4-5)\", \"Clarified belief structure and update mechanisms (Section 4.1)\", \"Added discussion for non-LLM agents computing belief alignment (Appendix A. Remarks)\", \"## 2. Multi-Agent System Design (W3, W5, P1, Q2)\", \"> **Main Concerns:**\", \">\", \"Unclear agent configuration\", \"Questions about prompt dependency\", \"Concerns about experimental fairness\", \"Impact of specialized skill scores\", \"Limited applicability to partially observable environments\", \"> **Our Responses:**\", \">\", \"### System Configuration\", \"Detailed settings across tasks (minimum coalition size is \u2308N/2\u2309):\", \"1. Programming Task:\", \"5 agents (1 PM + 4 Engineers)\", \"Coalition size of 3\", \"2. Debate Task:\", \"6 agents (3 per side)\", \"2 active debaters per round in ALL conditions\", \"3. 
Logical Reasoning:\", \"3 agents (ToM levels 0,1,2)\", \"Coalition size of 2\", \"### Implementation Clarification\", \"Algorithm is a \\\"plug-and-play\\\" mechanism independent of prompts\", \"Information asymmetry is not our current focus\", \"Same matching algorithm applied across different tasks\", \"Fair comparison maintained through consistent agent numbers\", \"*Manuscript Updates***:**\", \"Added case study for specialized scores (Appendix C.1)\", \"## 3. Evaluation and Metrics (W4, R4, P2)\", \"> **Main Concerns:**\", \">\", \"Questions about evaluation metrics\", \"Apparent conflict in cooperation trend results\", \"R4: Concerns about circular reasoning in evaluation metrics\", \"R2: Questions about performance improvements across rounds\", \"> **Our Responses:**\", \">\", \"### Comprehensive Evaluation\"], \"added_performance_analysis_across_tom_configurations\": [\"1. Initial Performance (Round 1):\", \"Similar baseline across configurations\", \"Example: HumanEval scores 0.87-0.93\", \"2. Without Matching (Round 5):\", \"Performance degradation observed\", \"PM(ToM=2) dropped from 0.90 to 0.85 on HumanEval\", \"3. With Matching (Round 5):\", \"PM(ToM=2) + Eng(ToM=0) achieved best results:\", \"HumanEval: 0.96 (+12.9%)\", \"MBPP: 0.60 (+22.4%)\", \"### Response to P2\", \"Clarified our goal is not adding ToM but optimizing existing capabilities\", \"Matching mechanism helps ToM agents to achieve better cooperation outcomes.\", \"*Manuscript Updates***:**\", \"Added comprehensive evaluation results in Appendix F.3\"]}", "{\"summary\": \"This work examines the influence of different levels of Theory of Mind capabilities on collaboration among multiple Large Language Model (LLM) agents. The authors propose an algorithm to enhance LLM agents\\u2019 teamwork performance by matching partners with high belief alignment. 
While the idea of guiding multi-agent collaboration through ToM and belief alignment is novel, this paper presents the proposed method in a less comprehensive manner, missing many important details. Researchers may encounter difficulties when applying the proposed algorithm in specific scenarios. Additionally, the claimed conclusions do not align well with the empirical results and therefore need further clarification.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The introduction and related work section motivate the research pretty well.\\n\\nThe idea of guiding multi-agent collaboration through ToM and belief alignment is novel. \\n\\nThe authors conduct comprehensive evaluations across diverse task scenarios and base LLMs, presenting both quantitative and qualitative results.\", \"weaknesses\": \"The ToM formulation presented in Section 4.1 deviates from the common definition of higher-order ToM. When conducting recursive ToM inferences at level-k, agents are only given their own belief at level-(k-1) rather than the beliefs of other agents. I recommend that the authors refer to [1] for its definition of higher-order mental state inference.\\n\\nThe proposed alignment measurement in Section 4.2 may not apply to general high-order ToM inferences in multi-agent systems. For example, \\u201cwhat I think A is thinking about B\\u2019s action\\u201d and \\u201cwhat I think C is thinking about B\\u2019s action\\u201d are different 1-level ToM inferences that result in the same alignment measurement as defined in this paper. The authors might want to explicitly define the format of beliefs to clarify the formulation.\\n\\nThe multi-agent setup for each evaluation scenario is not clearly described. It is unclear how many agents are involved, what their action and observation spaces are, and how they collaborate. 
For instance, the interactive programming scenario appears to be a centralized system with full observation, as the PM is the only agent making decisions and ToM inferences. Then the value of ToM is less salient in such a single-agent system.\\n\\nThe two evaluation metrics are the optimization objectives of the proposed algorithm rather than direct measurements of LLM agents\\u2019 collaboration performance or \\u201ccooperation trends.\\u201d The claim that \\u201cagents with higher ToM capabilities may not necessarily exhibit better cooperative trends\\u201d conflicts with the results shown in Tables 3 and 4, where agents with ToM perform better. I recommend using other metrics, such as task completion rate or efficiency, to provide consistent conclusions and increase criterion validity.\\n\\nThe proposed algorithm is vague and highly dependent on specific prompt design when generalizing to different task scenarios. For instance, what happens when an agent is assigned to cooperate with a given partner (line 8 of Algorithm 1) is not clearly defined for each scenario. This ambiguity could lead to potential bias in evaluations. In the debate case study (i.e., lines 495-497), the ToM with matching condition has two LLM agents forming arguments, while the other conditions only involve one. The performance advantage might be due to the increased number of agents via self-reflection, rather than the proposed matching algorithm.\\n\\n\\n[1] Ying, L., Jha, K., Aarya, S., Tenenbaum, J. B., Torralba, A., & Shu, T. (2024). GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment. 
arXiv preprint arXiv:2403.11075.\", \"questions\": \"How reliable are the alignment measurements provided by LLMs?\\n\\nHow are the specialized ability scores used in evaluations, and what is their impact?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Rebuttal (including reviewer's key concerns, our responses and improvement)\", \"comment\": [\"We sincerely thank the reviewer for their thorough and constructive feedback. Below is a comprehensive summary of our rebuttal (W refers to weaknesses from the reviewer's initial comments, Q refers to questions from the follow-up discussion):\", \"## 1. Coalition Formation Algorithm Design (A-W1, A-W2, A-W3, QC1, QC2)\", \"> **Reviewer's Concerns**\", \">\", \"Unclear partner set specification and matching process\", \"Underspecified rematching process\", \"Unclear implementation of skill levels\", \"Questions about belief operator notation and implementation\", \"> **Our Responses and Improvements**\", \">\", \"**Coalition Structure**\", \"Clarified that the minimum coalition size is \u2308N/2\u2309\", \"Revised and updated the generalized coalition formation in Section 4.2 & 5.2\", \"In Section 4.2, updated preference structure in Eq. 
2: $B_i(S) = (1/|S|) \\u2211_{j \\u2208 S} \\u03c6(b^k_i(a_j) - \\u00e2_j)$\", \"In Section 5.2, updated preference structure with specialized skill scores.\", \"Explained tolerance (\\u03b5) implementation for search space reduction (Lines 238-244)\", \"*Manuscript Updates:*\", \"Added generalized formulation for preference ordering (Pages 5-6)\", \"Updated belief operator notation (Algorithm 1): if $\\u2203j \\u2208 \\u03bc(i): \\u2016b_i^k(a_j) - \\u00e2_j\\u2016 > \\u03b5$\", \"Added explanation of the main purpose of tolerance implementation (Lines 238-244)\", \"Provided stability proof and convergence guarantee (Appendix G)\", \"**Rematching Process**\", \"Detailed three-step procedure:\", \"1. Trigger new cooperation round with updated preferences\", \"2. Implement new coalitions if stable matching found\", \"3. Continue current coalition for adaptation if no stable matching\", \"*Manuscript Updates:*\", \"Added detailed rematching in Algorithm 1\", \"Added adaptation period explanation (Lines 303-305)\", \"**Skill Integration**\", \"Clarified skill score (\\u03b1\\u2c7c) source: Agent role based assessment\", \"Focus on task-specific requirements\", \"*Manuscript Updates:*\", \"Added evaluation settings for skill scores in our experiments (Appendix C.1)\", \"## 2. 
ToM implementation (B-W1, B-W2, D-W1, D-W2)\", \"> **Reviewer's Concerns**\", \">\", \"Questions about ToM implementation accuracy\", \"Limited evaluation of different ToM level combinations and narrow focus on PM's ToM capabilities\", \"Unclear relationship between belief alignment and cooperation\", \"> **Our Responses and Improvements**\", \">\", \"**ToM Implementation**\", \"Demonstrated recursive structure over interaction rounds\", \"Focused on cooperative task settings with shared goals.\", \"Highlighted that the core contribution is the coalition formation mechanism as a plug-and-play approach for improving multi-agent cooperation with ToM agents, rather than improving ToM implementation.\", \"Extended evaluation to include multiple ToM level combinations\", \"*Manuscript Updates:*\", \"Updated and clarified the recursive belief structure formulation (Pages 4-5)\", \"Expanded evaluation results (Appendix F.3) and demonstrated improved code implementation (Pass@1 metrics), better debate win rates and enhanced coordination in both hierarchical and peer-based structures\", \"## 3. Experimental Design and Terminology (C-W1, C-W2, C-W3, QA1-3, QB1-2)\", \"> **Reviewer's Concerns**\", \">\", \"Unclear ToM level assignments\", \"Ambiguous prompt modifications\", \"Unspecified agent numbers and coalition sizes\", \"Questions about terminology and implementation complexity\", \"> **Our Responses and Improvements**\", \">\", \"**Experimental Configuration**\", \"Detailed settings for all tasks (minimum coalition size is \u2308N/2\u2309):\", \"1. Programming Task: 5 agents (1 PM + 4 Engineers), coalition size 3. Clarified baseline implementation without sampling or selection\", \"2. Debate Task: 6 agents (3 per side), coalition size 2 per team\", \"Confirmed ToM level 0 for negative side\", \"Added symmetric evaluation\", \"3. 
Logical Reasoning: 3 agents (ToM levels 0,1,2), coalition size 2\", \"*Manuscript Updates:*\", \"Symmetric evaluation for the debate task was added in Section 6.4.\", \"**Terminology and implementation complexity**\", \"Clarified prompt structure:\", \"Base prompt remains constant\", \"Only teammate list updates after coalition formation\", \"Coalition formation occurs in the first cooperation round when rematching is true\", \"*Manuscript Updates:*\", \"Clarified the implementation details in Algorithm 1.\", \"Indicated \\\"team selection\\\" related to \\\"coalition formation\\\" in our manuscript.\"]}", "{\"title\": \"Part C. Regarding details for algorithm and notations\", \"comment\": \"> I appreciated the point about using the $\\\\epsilon$ tolerance to reduce the search space of coalitions, but I would suggest the authors to explicitly mention what happens in this instance in the paper.\\n> \\n\\nThanks the reviewer\\u2019s suggestion. We have revised our updated manuscript page 5 Lines 238-244.\\n\\n> QC1: Finally, in equation (2) the the belief operator $b^k_i$ applies to a single agent's action $a_j$ (and similarly in line 3 of Algorithm 1). In line 9 of algorithm 1, however it is applied to $a_{\\\\mu(i)}$, which presumably refers to the joint action of the agents $j \\\\in \\\\mu(i)$. Can the authors please explain what is going on here?\\n> \\n**R.QC1**:\\nThanks for your thoughtful comments on the belief operator notation. To clarify:\\n\\n\\n- LLM agents can actually process beliefs about multiple agents simultaneously and output agent-specific beliefs\\n - Example output for belief in Appendix D (Debate case study):\\n \\n {\\n \\\"belief\\\": \\\"{Hearing my points, **teammate 1** may pivot to arguing that the death penalty is not an effective use of government resources. **Teammate 2** will likely reinforce my point about racial disparities in death\\n sentences.}\\\"\\n }\\n }\\n \\n- We agree that our notation could be more precise. 
We have updated the notation as follows:\\n - Line 9: if $\u2203j \u2208 \u03bc(i): \u2016b\u1d4f\u1d62(a\u2c7c) - \\hat{a}\u2c7c\u2016 > \u03b5$ (highlighted in purple color)\\n\\n > QC2: Why not just $B_i(\\mu(i))$ on line 9?\\n > \\n \\n **R.QC2**: We deliberately use individual agent belief comparisons rather than B\u1d62(\u03bc(i)) for rematch signaling, which provides finer-grained control over coalition dynamics, as mentioned in our reply **R.QB2**. For example:\\n \\n Coalition {PM, E1, E2}:\\n \\n - If $b^k_{PM}(a_{E1})$ exceeds \u03b5 but $b^k_{PM}(a_{E2})$ doesn't\\n - PM can **signal rematch specifically** regarding E1 through broadcast communication\\n - E1 may adapt its action in the new round (one more round). This potential adaptation could reduce the complexity of coalition/team formation.\\n\\n\\nWe sincerely thank again the reviewer for the thoughtful comments, detailed questions, and the effort they put into providing valuable feedback to improve our work!\"}", "{\"title\": \"Close to the end of discussion December 2nd\", \"comment\": \"We sincerely appreciate your time and efforts during the review and rebuttal period.\\n\\nAs we approach the end of the rebuttal period on December 2nd, we wanted to follow up on our previous discussion. We hope our response has adequately addressed your concerns. If any points require further clarification, we would be grateful for your feedback. Thank you again for your consideration throughout this process.\"}", "{\"comment\": \"> W5. The proposed algorithm is vague and highly dependent on specific prompt design when generalizing to different task scenarios. For instance, what happens when an agent is assigned to cooperate with a given partner (line 8 of Algorithm 1) is not clearly defined for each scenario. This ambiguity could lead to potential bias in evaluations. 
In the debate case study (i.e., lines 495-497), the ToM with matching condition has two LLM agents forming arguments, while the other conditions only involve one. The performance advantage might be due to the increased number of agents via self-reflection, rather than the proposed matching algorithm.\\n\\nR5. We sincerely thank the reviewer for the constructive feedback, and we clarify the following two aspects:\\n\\n 1. Regarding Algorithm Design and Implementation:\\n\\n > The proposed algorithm is vague and highly dependent on specific prompt design when generalizing to different task scenarios.\\n\\n The characterization of our approach as \u201cheavily dependent on prompt design\u201d is a misunderstanding. Our algorithm is actually a **\\\"plug-and-play\\\" coalition formation mechanism** that operates **independently of the specific prompt design** or task scenarios. Specifically:\\n \\n - The algorithm's core functionality (belief updating, preference ordering, and matching) remains consistent across different tasks\\n - Line 8 of Algorithm 1 (\\\"Cooperate with assigned partner\\\") refers to the standard interaction protocol of the underlying task, **not a prompt-dependent process**\\n - For example, the same matching algorithm is applied in both our programming and debate case studies despite their very different underlying tasks and prompts. This demonstrates the generality of our approach.\\n \\n 2. Experimental Setting\\n \\n > The performance advantage might be due to the increased number of agents via self-reflection, rather than the proposed matching algorithm.\\n\\n This is a misreading of our experimental setup as using \\\"only one agent\\\" in the other conditions. 
\\n \\nTo clarify, ALL conditions use TWO debaters per round:\\n \\n - No-ToM (Baseline): \\\"For each speech, two debaters were randomly selected from each side\\\" (line 485)\\n - ToM without Matching: \\\"For each speech, two debaters were randomly selected from each side\\\" (Line 493-494)\\n - ToM with Matching: Two debaters selected via our coalition formation mechanism (Line 497)\\n \\nThis experimental design ensures a direct, fair comparison by maintaining the **same number** of active debaters (two) across all conditions. The only difference is the **selection method** - random versus our matching algorithm. \\n\\nTherefore, our performance improvements cannot be attributed to \\\"increased number of agents\\\" as the reviewer suggests, but rather to our algorithm's ability to select more effective agent combinations from the same pool of candidates.\"}", "{\"title\": \"Re. D. Game-Theoretic Reasoning and Precision of Claims\", \"comment\": \"> D-W1. A key example is the authors' claim that the so-called \\\"Fraction of Trust Members (FTM)\\\" is a good measure of what they term the \\\"cooperative trend\\\" (N.B. to be grammatical, this should probably be \\\"Trusted\\\" not \\\"Trust\\\", though it is not actually clear what the relevance of the concept of \\\"trust\\\" even is here). But belief alignment by itself does not imply higher levels of cooperation. I may have perfectly accurate beliefs about what you are going to do in a two-player zero-sum game (where cooperation is definitionally impossible). Thus, it is clearly not true that in general \\\"a higher FTM value [indicates] a more cooperative agent\\\", as claimed in line 410/411.\\n> \\n\\n**R.D-W1.** \\n\\n1. While we agree that belief alignment alone doesn't guarantee cooperation in all settings (e.g., zero-sum games), our work specifically focuses on cooperative task settings where agents share common goals (software development, collaborative debate). 
In these contexts, our empirical results demonstrate that higher belief alignment (optimized by our matching mechanism) consistently correlates with better team performance:\\n - In programming tasks, the team achieved better code implementation (Pass@1 metrics in Table 3);\\n - In debate tasks, teams achieved better win rates (Table 4)\\n2. We agree that robust evaluation is crucial for validating our claims, and we evaluated performance metrics to examine ToM\u2019s impact on cooperation:\\n - **In our original manuscript**, we have examined how \\\"Raw ToM capabilities alone may not improve cooperation\\\" in Appendix F, including:\\n - DyLAN with ToM on coding tasks, using the *importance score* defined by DyLAN (Appendix F.1)\\n - ChatEval with ToM on logic problem-solving and general reasoning, using *accuracy* (Appendix F.2)\\n - **New Results**. We have added comprehensive experiments incorporating **different ToM configurations** for Project Manager (PM) and Engineers (Eng) and tracked **performance metrics** (Pass@1) to validate cooperation effects (detailed in Appendix F.3 of the updated manuscript and in our **response R.B-W2 for different ToM configurations**.)\\n\\n> D-W2. Relatedly, the authors talk about ToM improving cooperation but it is more about ToM improving the facilitation/management skills of a single PM agent. This is also a very interesting and valid topic of study, but I suggest the authors change the phrasing slightly throughout the paper to better reflect the rather narrow form of cooperation problem they consider. Indeed, my understanding is that the authors largely focus on the case where only one agent (a PM) is imbued with ToM.\\n> \\n\\n**R.D-W2.**\\n\\n1. Our focus on the PM's ToM capabilities was an intentional design, where effective leadership often requires sophisticated perspective-taking abilities. 
\\n - This approach mirrors real-world hierarchical team structures where managers coordinate multiple contributors.\\n - In our updated experimental results (appendix F.3 or our response R.B-W2.), **the combination of PM(ToM=2) + Eng(ToM=0) with matching consistently achieves the best sustained performance**.\\n2. Besides this setting, our debate setting (Section 6.4) demonstrates much **broader settings** in cooperation:\\n - **All debaters possessed ToM capabilities** (0-2 level ToM respectively), not just a single leader\\n - **Peer-to-Peer Cooperation**: Debaters coordinated as equals within teams, showing our framework's effectiveness beyond hierarchical structures\\n - Performance Benefits: Teams with our coalition matching mechanism achieved higher win rates (67.27% vs 61.82% w.o matching), showing improved cooperation regardless of organizational structure.\\n\\nWe sincerely thank the reviewer again for this constructive feedback, and we revised our manuscript accordingly, highlighted in purple color.\"}", "{\"title\": \"Part A: Regarding coalition algorithm and baseline setting\", \"comment\": \"We sincerely thank the reviewer for their thoughtful comments, detailed questions, and the effort they put into providing valuable feedback to improve our work.\\n\\n> QA1: First, and this is very minor, if there is only ever one coalition that is used in solving the task, I suggest that the authors drop the phrase \\\"coalition formation\\\" and replace it with something more fitting such as \\\"team selection\\\". 
If nothing else they should state very clearly very early on in the paper that when they talk about coalition formation, it is about selecting a subgroup of the agents to complete the task,\\u00a0*not*\\u00a0about partitioning the agents (all of whom then complete the task).\\n> \\n\\n**R.QA1**: We have highlighted the *\\u201cteam selection\\u201d* in the Abstract, Introduction, and Section 5.1.\\n\\n> QA2: The algorithm for finding multiple coalitions now seems like overkill, because it sounds like it is only required to output a single coalition of a size greater than some constant\\u00a0`#min_coalition_size`\\u00a0where all the agents in the coalition have sufficiently strong belief alignment (with the option to also incorporate skill levels). Do the authors have an argument for why we can't do something simpler?\\n> \\n\\n**R.QA2**: We appreciate the reviewer's insightful suggestion about algorithm simplification. \\n\\n- We acknowledge that a greedy approach would be simpler and could potentially achieve comparable results.\\n- In this work, we consider to use multiple coalitions during the search process for **optimality**, as illustrated by this example:\\n - Agents: {A, B, C, D, E}\\n - `#min_coalition_size` = 3\\n - Bilateral belief-alignment scores (lower is better):\\n\\n```\\nA \\u2194 B: (0.2, 0.3) B \\u2194 C: (0.3, 0.2) C \\u2194 D: (0.1, 0.2)\\nA \\u2194 C: (0.3, 0.4) B \\u2194 D: (0.4, 0.3) C \\u2194 E: (0.4, 0.3)\\nA \\u2194 D: (0.4, 0.3) B \\u2194 E: (0.2, 0.4) D \\u2194 E: (0.3, 0.2)\\nA \\u2194 E: (0.1, 0.5)\\n```\\n\\n**Simple Greedy Approach:**\\n\\n1. 
Start with A's best local preference:\\n - $B_A({A,E})$ = 0.1\\n - Add B: $B_A({A,E,B})$ = 0.15\\n - The greedy approach finally yields coalition {A,E,B} with scores:\\n - $B_A({A,E,B})$ = 0.15\\n - $B_E({A,E,B})$ = 0.45\\n - $B_B({A,E,B})$ = 0.25\", \"average\": [\"0.25\", \"This example shows why our approach is necessary to identify optimal coalitions, even though we output a single coalition.\", \"Notably, in our implementation, the coalition formation only **happens in the first cooperation round when rematching is true (for evaluating the lifetime of the coalition mentioned in Line 373)**. We apologize for any confusion in our original manuscript and we have **updated** the description to accurately reflect this implementation detail (Algorithm 1 highlighted in purple color).\", \"> QA3: In the experimental baselines, which agents solve the task? E.g. if there is one PM and five engineers, do all five solve the task, or is some random sub-group of size\u00a0`#min_coalition_size`\u00a0selected to solve the task, or something else? 
This is also important if task skills are incorporated into coalition formation, because if this consideration is not taken into account in the baseline then it is unclear whether simply selecting the most skilled agents is actually what helps them to perform the task better.\", \">\", \"R.QA3:\", \"For the example mentioned in QA3, the baseline consists of five engineers; it does **not sample or select**.\", \"The specialized scores affect the project engineer only in our TASK 1 evaluation.\", \"As mentioned in **R.A-W1**, we discuss the specialized scores in the PM and Eng setting in Appendix C.1, where specialized ability scores **primarily influence the PM**, since effective leadership and coordination capabilities are crucial (as also evidenced by the extensive evaluation in Appendix F.3)\", \"Thus, the TASK 1 baseline ensures that no unfair advantage is conferred by selecting the most skilled agents.\", \"Finally, we have revised our manuscript to **explicitly point out** this baseline setting in Line 409 of the updated manuscript.\"]}", "{\"comment\": \"We thank reviewer P68Z's recognition of our paper's clear motivation, novelty and promising results.\", \"we_address_specific_concerns_below_point_by_point\": \"> W1. The ToM formulation presented in Section 4.1 deviates from the common definition of higher-order ToM. When conducting recursive ToM inferences at level-k, agents are only given their own belief at level-(k-1) rather than the beliefs of other agents. I recommend that the authors refer to [1] for its definition of higher-order mental state inference.\\n\\nR1. We would like to thank the reviewer for this suggestion. 
We want to clarify that our formulation does align with the definition of higher-order ToM and with [1].\\n\\n- In our paper, lines 286-288 explicitly state that agent $i$ uses the $k$-level ToM function $\\text{ToM}_i^k(\\cdot)$ to form beliefs $b_i^k$ about the mental states of other agents, based on its observations $o_i$, the actions $a_{-i}$ of **others**, and the $(k-1)$-level beliefs $b_{-i}^{k-1}$ of others.\\n\\n- The observations and other agents\u2019 actions are **accumulated** over recursive time steps.\\n\\nTo make this clearer, we can expand our formulation to explicitly show the recursive time steps t.\\n\\n> W2. The proposed alignment measurement in Section 4.2 may not apply to general high-order ToM inferences in multi-agent systems. For example, \u201cwhat I think A is thinking about B\u2019s action\u201d and \u201cwhat I think C is thinking about B\u2019s action\u201d are different 1-level ToM inferences that result in the same alignment measurement as defined in this paper. The authors might want to explicitly define the format of beliefs to clarify the formulation.\\n\\nR2. As shown in Equation (1), our formulation can indeed express and distinguish between different high-order ToM inferences:\\n \\n$ToM_i^k(o_i, a_{\u2212i}, b_{\u2212i}^{k\u22121}) := b_i^k$\\n \\nLet's use the reviewer's example to demonstrate how our formulation handles different ToM inferences:\\n \\n 1. \\\"What agent i thinks A is thinking about B's action\\\":\\n \\n - This involves $b_i^2$ where i is reasoning about A's belief ($b_A^1$) about B's action\\n \\n - The nested belief structure captures: i \u2192 A \u2192 B\\n2. 
\\\"What agent i thinks C is thinking about B's action\\\":\\n \\n - This involves a different $b_i^2$ where i is reasoning about C's belief ($b_C^1$) about B's action\\n \\n - The nested belief structure captures: i \\u2192 C \\u2192 B\\n \\n**These two are distinct in our formulation** because:\\n \\n - $b_{-i}^{k\\u22121}$ includes the different (k-1)-level beliefs of agents A and C\\n - The resulting $b_i^k$ maintains these **distinct** belief.\\n\\n> W3. The multi-agent setup for each evaluation scenario is not clearly described. It is unclear how many agents are involved, what their action and observation spaces are, and how they collaborate. For instance, the interactive programming scenario appears to be a centralized system with full observation, as the PM is the only agent making decisions and ToM inferences. Then the value of ToM is less salient in such a single-agent system.\\n\\nR3. While the programming task had a hierarchical structure, **Section 6.4** describes our debate setting which demonstrates full multi-agent dynamics:\\n \\n - Multiple agents (6 debaters) with ToM capabilities\\n - Peer-to-peer interactions within teams\\n \\n> W4. The two evaluation metrics are the optimization objectives of the proposed algorithm rather than direct measurements of LLM agents\\u2019 collaboration performance or \\u201ccooperation trends.\\u201d The claim that \\u201cagents with higher ToM capabilities may not necessarily exhibit better cooperative trends\\u201d conflicts with the results shown in Tables 3 and 4, where agents with ToM perform better. I recommend using other metrics, such as task completion rate or efficiency, to provide consistent conclusions and increase criterion validity.\\n\\nR4. 
\\n**Results and Core Contribution:**\", \"the_apparent_conflict_the_reviewer_notes_actually_demonstrates_our_key_finding\": [\"**Raw ToM capabilities alone may not improve cooperation** (shown in earlier results),\", \"**But** our **matching mechanism effectively leverages ToM to enhance team performance** (Tables 3 and 4).\", \"This is precisely why our core contribution - the coalition formation mechanism - is valuable:\", \"Without matching: Higher ToM didn't necessarily improve cooperation\", \"With matching: Significantly improved performance across tasks by effectively leveraging the high ToM agents\", \"**We have utilized different metrics for different purposes:**\", \"FTM measures raw ToM effects on cooperation\", \"**Task-specific metrics** (Pass@1, win rates) validate practical benefits for cooperation.\", \"Coalition stability shows sustained cooperation improvements\"]}", "{\"summary\": \"This paper studies the concept of theory of mind (ToM) in the context of one LLM agent forming and managing coalitions of other LLM agents. They show that prompting the first LLM agent to engage in 2-level reasoning about the others' beliefs can actually _hinder_ performance compared to 1-level reasoning (cf. the general concept of k-level reasoning). They introduce a method of comparing and matching agents based on their ability to predict each other's actions. Their experiments study how the use of that metric in forming coalitions of LLM agents impacts the agents' ability to solve problems in the domains of programming, logic, and general reasoning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The questions underlying this paper are certainly interesting. While much work has focused on the improved cooperation using ToM, few (to my knowledge) have investigated how this might hinder cooperation, at least not (again, to my knowledge) in the context of LLM agents. 
More generally, the issue of intelligent coalition formation in this context is an interesting problem and one that I believe will have increasing real-world relevance in the coming years. The idea of alignment of beliefs in order to solve this problem is natural and original (again, that is to the best of my knowledge, though I also would not be at all surprised if a version of this had been studied before in the game theory or multi-agent systems literatures, outside the context of LLMs). I also appreciated the effort the authors put in to studying a relatively wide variety of tasks, models, and frameworks for multi-LLM problem-solving. The presentation of their results was largely clear.\", \"weaknesses\": [\"Unfortunately, despite its positive aspects, I think the paper does have some significant issues. In what follows I have attempted to cluster these and order them approximately by importance.\", \"**Matching Algorithm Confusions**\", \"The matching algorithm seems underspecified in several places, and it is not always clear what the authors are actually doing. Moreover, I didn't feel that the underlying theoretical principles were always appropriate. More concretely:\", \"The authors start by talking about the set of an agent $i$'s partners $\\\\mu(i)$, but based on their matching algorithm they seem to implicitly assume that $\\\\mu(i)$ is always a singleton (see Equation 2). Otherwise, matchings are stable only when player $i$ would not prefer to be partnered with $j$ over its current _set_ of partners $\\\\mu(i)$. But what would that mean? Are preferences between sets of agents instead of single agents? If so, the relevant comparison would presumably be that agent player $i$ would not prefer to be partnered with any _set_ of other agents over its current _set_ of partners $\\\\mu(i)$.\", \"In line 9 of algorithm 1, what happens after agents signal a desire to re-match? 
What if the belief misalignment measure is greater than the tolerance $\\\\epsilon$ for all other agents? Does the agent end up in a singleton coalition? The authors state that the iterative process of coalition formation ends in a stable matching but they do not actually prove this. Especially with the introduction of preferences based on differing skills (see the next point), I actually suspect that it would be trivial to create a cyclic matching problem.\", \"The preference order described in equation 4 are based on agents having different skill levels $\\\\alpha_i$ on different tasks, but where do these skill levels come from? More importantly, if the point is to match agents with complementary skills, why does the matching algorithm only compare agents' skills on a _single_ task?\", \"Minor: the authors say on line 244 that the alignment between beliefs and actions is not mathematical subtraction, despite them denoting it that way. I would strongly suggest not denoting it using subtraction to begin with and being more explicit about what the distance measure here actually is.\", \"**Strength of Motivating Claim**\", \"The authors' motivating claim is that lower-level ToM abilities may improve the ability of agents to cooperate beyond higher-level ToM abilities. Their justification for this is a setting where one agent -- a \\\"Project Manager\\\" (PM) -- is instructed to use either 1-level or 2-level reasoning to organise several other agents (all of which are instructed use 0-level reasoning). But this essentially means that in the latter case the PM is being instructed to reason about the agents acting in a way that they do not in fact act. Explained this way, it is still somewhat interesting but by no means surprising that the PM is less successful when prompted to reason using higher-level ToM. 
Essentially, $k$-level reasoners are designed to best respond to $(k-1)$-level reasoners, not $(k-2)$-level reasoners.\", \"Relatedly, looking through the actual LLM outputs included in the appendices, the level-2 ToM responses seem quite strange. They are worded as if they are predicated on the other agents _actually observing_ actions in advance, rather than _anticipating_ instructions. I am not really sure what is going on here, but reading through it was not at all surprising that the higher ToM agents performed less well on the task, as they appeared to be being mis-instructed.\", \"As a final sub-point on this topic, I suggest that the authors also benchmark against a 0-level PM and against settings where the agents are 1-level or 2-level reasoners, at least for their motivating experiments described in Table 1.\", \"**Missing Experimental Details**\", \"There are several (relatively minor) aspects missing from the discussion and presentation of the experiments that, if present, would improve the paper.\", \"There are no error bars or reports of standard errors for the experimental results, making it difficult to interpret their statistical significance.\", \"I assume the ToM level for debating agents arguing for the negative side is 0, but it would be good to clarify this.\", \"Once coalitions are formed, how do the prompts/instructions given to the agents in different coalitions actually change?\", \"How many agents are actually present in the various settings, and what are the sizes of the coalitions that are formed?\", \"**Game-Theoretic Reasoning and Precision of Claims**\", \"This is a relatively minor, but a few times I found myself slightly frustrated by the authors claims, which I believe did not fully take into account the relevant game-theoretic concepts (see also the confusing use of what appears to be a binary matching algorithm for n-player coalition formation, described further above).\", \"A key example is the authors' claim that the so-called 
\\\"Fraction of Trust Members (FTM)\\\" is a good measure of what they term the \\\"cooperative trend\\\" (N.B. to be grammatical, this should probably be \\\"Trusted\\\" not \\\"Trust\\\", though it is not actually clear what the relevance of the concept of \\\"trust\\\" even is here). But belief alignment by itself does not imply higher levels of cooperation. I may have perfectly accurate beliefs about what you are going to do in a two-player zero-sum game (where cooperation is definitionally impossible). Thus, it is clearly not true that in general \\\"a higher FTM value [indicates] a more cooperative agent\\\", as claimed in line 410/411.\", \"Relatedly, the authors talk about ToM improving cooperation but is more about ToM improving the facilitation/management skills of a single PM agent. This is also a very interesting and valid topic of study, but I suggest the authors change the phrasing slightly throughout the paper to better reflect the rather narrow form of cooperation problem they consider. Indeed, my understanding is that the authors largely focus on the case where only one agent (a PM) is imbued with ToM.\"], \"questions\": \"Please see the Weaknesses section for my questions. I also welcome the authors to correct any misunderstandings I may have about their paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Regarding Questions\", \"comment\": \"> Q1. How reliable are the alignment measurements provided by LLMs?\\n\\n This LLM self-evaluation is widely used in existing literature (Qin et al., 2023; Zheng et al., 2023; Liu et al., 2024). We acknowledge that LLM-based measurements have inherent limitations. 
\\n\\nHowever, we want to emphasize that our core contribution is the coalition formation mechanism as a plug-and-play approach for improving multi-agent cooperation, rather than advancing LLM evaluation methods.\\n \\n- For non-LLM agents, computing semantic similarity between **structured trajectories** would be more straightforward than natural language comparison.\\n \\n- We will discuss extending our framework to non-LLM agents: Using predefined trajectories (state-action sequences) instead of natural language and applying established trajectory similarity metrics (e.g., cosine similarity on state-action embeddings).\\n\\n> Q2. How are the specialized ability scores used in evaluations, and what is their impact?\\n\\n We thank the reviewer for this comment and will enhance the paper with a more detailed explanation of the evaluation settings.\\n \\nIn the primary benchmarks (HUMANEVAL and MBPP), specialized ability scores played a limited role because:\\n \\n - These tasks primarily involve implementing single functions with well-defined requirements\\n - They don't require diverse technical specializations that would significantly benefit from specialized role matching\\n \\nIn our evaluation, \\n \\n - We primarily employed **specialized ability scores in the project manager role**, where:\\n * The PM needs to coordinate and oversee the entire development process, which ensures the coalition includes effective leadership and coordination\\n * For debate and logical reasoning tasks: We **didn't explicitly use specialized ability scores** as these tasks don't require distinct technical specializations.\\n \\n**Case Study: Complex Software Development**\\n \\nTo better demonstrate the impact of specialized ability scores, we propose adding a case study \\\"*implementing a 2048 game*\\\", which requires diverse technical skills:\\n \\n```\\nspecialized_scores = {\\n 'UI_Engineer': {'frontend': 0.9, 'backend': 0.3},\\n 'Backend_Engineer': {'frontend': 0.2, 'backend': 0.8},\\n 
'FullStack_Engineer': {'frontend': 0.6, 'backend': 0.6},\\n }\\n```\\n \\n- **Preliminary Results:**\\n - Without specialized scoring: 65% task completion rate\\n - **With specialized scoring**: 82% task completion rate\"}", "{\"comment\": \"We sincerely thank you for your positive feedback on our manuscript, and greatly appreciate your careful consideration of our responses and the additional experimental results we provided.\\n\\nAs suggested, we have explicitly mentioned LLMs in our title to better align with the paper's content. In our updated manuscript, the title is modified to \\\"Cognitive Insights and Stable Coalition Matching for Fostering **LLM-based** Multi-Agent Cooperation.\\\"\\n\\nGiven your positive assessment of how we've addressed your concerns, we respectfully ask if you would consider raising your score?\\n\\nThank you again for your detailed review and for helping us improve the clarity and rigor of our work!\"}", "{\"comment\": \"Thanks for providing the revised ToM definition and new experimental results. However, my concerns remain as follows:\\n\\nThe revised definition is much clearer than the original one presented in the paper. However, it seems to be limited to specific scenarios and ToM inferences, and therefore lacks generalizability. For example, the proposed ToM formula does not apply to partially observable environments where agents do not always have access to other agents' actions. Similarly, the higher-order ToM definition is limited to agent i's prediction of agent j's action (in some sense, inferring the intention), while ignoring other types of mental state inferences (e.g., belief, desire). 
For example, the current 2nd-order inference would be \\\"how i thinks about j's action, given i's 1st-order inference of j's action.\\\" Other 2nd-order nested ToM reasoning processes commonly discussed in the literature, such as \\\"how i thinks j is thinking about i's action,\\\" cannot be represented in the proposed definition.\\n\\nThe newly added experimental results still do not support the claimed conclusions. Most conditions in Tables 2 and 3 are worse than those in Table 1, meaning that introducing ToM with an additional 4 rounds of interaction and the matching mechanism does not improve team performance. The only exception is the last condition in which the claimed degradation and recovery are observed. More experiments are needed to explain the divergence in performance.\\n\\nGiven the above considerations, I have decided to keep my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Re. A-W3. Matching Algorithm Confusions\", \"comment\": \"> A-W3. In line 9 of algorithm 1, what happens after agents signal a desire to re-match? What if the belief misalignment measure is greater than the tolerance $\\\\epsilon$ for all other agents? Does the agent end up in a singleton coalition? The authors state that the iterative process of coalition formation ends in a stable matching but they do not actually prove this. Especially with the introduction of preferences based on differing skills (see the next point), I actually suspect that it would be trivial to create a cyclic matching problem.\\n> \\n\\n**R.A-W3.** We thank the reviewer\\u2019s detailed comments. We will address each question one by one.\\n\\n> In line 9 of algorithm 1, what happens after agents signal a desire to re-match? 
\\nWhen agents signal a desire to re-match, our algorithm:\\n> \\n- Triggers a new cooperation round and updates the preference orders based on the latest belief-action alignments\\n- If a stable matching is found, proceed with the new coalitions\\n- If no stable matching is found, keep the current coalitions for one more round\\n\\n> What if the belief misalignment measure is greater than the tolerance $\\\\epsilon$ for all other agents? Does the agent end up in a singleton coalition?\\n> \\n\\n**For handling Universal Misalignment**: \\n\\n- The primary goal of using $\\\\varepsilon$ is to **reduce the search space**\\n\\nWhen belief misalignment exceeds $\\\\varepsilon$ for all potential partners, our implementation:\\n\\n- The preference order can still be established by relying only on the alignment score.\\n- We also use #min_coalition_size (default is $\\\\lceil N/2 \\\\rceil$) so that at least #min_coalition_size agents are in the coalition. This takes higher priority than \\u201creducing the search space\\u201d; the tolerance will then be adjusted.\\n\\n**Comprehensive Example.** We use the following example to illustrate: \\n\\n- Consider 6 agents: {A, B, C, D, E, F}, #min_coalition_size=2, Initial tolerance $\\u03b5$ = 0.3, and Belief-Action Alignment Scores as follows:\\n \\n ```\\n A's scores:\\n A -> B: 0.1 (good alignment)\\n A -> C: 0.2 (good alignment)\\n A -> D: 0.25 (good alignment)\\n A -> E: 0.4 (poor alignment)\\n A -> F: 0.45 (poor alignment)\\n \\n B's scores:\\n B -> C: 0.15 (good alignment)\\n B -> D: 0.35 (poor alignment)\\n B -> E: 0.1 (good alignment)\\n B -> F: 0.2 (good alignment)\\n ```\\n \\n- Case 1: Using \\u03b5 to Reduce Search Space\\n 1. For agent A:\\n - Only consider partners with scores \\u2264 \\u03b5 (0.3)\\n - Candidate pool: {B, C, D}\\n - Reduces from 10 possible 3-agent combinations to only 1: {A,B,C}\\n 2. 
For agent B:\\n - Candidate pool based on \\u03b5: {C, E, F}\\n - Significantly reduces coalition possibilities\\n- Case 2: Without Using \\u03b5 (Just Sorting)\\n - Must evaluate all possible 3-agent combinations\\n - For just agent A: Need to check all combinations:\\n {A,B,C}, {A,B,D}, {A,B,E}, {A,B,F}, {A,C,D}, {A,C,E}, etc.\\n- **Key Insight**\\n - With \\u03b5: O(k) comparisons where k is number of agents within tolerance\\n - Without \\u03b5: O(n choose #min_coalition_size) comparisons where n is total number of agents\\n\\n> The authors state that the iterative process of coalition formation ends in a stable matching but they do not actually prove this. Especially with the introduction of preferences based on differing skills (see the next point), I actually suspect that it would be trivial to create a cyclic matching problem.\\n> \\n\\nWe appreciate the reviewer's concern about potential cyclic preferences in our coalition formation mechanism. \\n\\nFor the revised generalized formulation of **many-to-many matching (page 5 & 6),** let us provide a rigorous proof of stability and acyclicity:\\n\\n- **Key Properties**\\n 1. **Property 1 (Well-Defined Hybrid Scoring):** For any coalition S and agent i:\\n \\n $B'_i(S) = B_i(S) + \\\\lambda \\\\cdot \\\\alpha(S)$\", \"where\": \"- $B_i(S)$: belief-action alignment (continuous)\\n - $\\\\alpha(S)$: specialized ability score (bounded)\\n - $\\\\lambda$: scaling factor\\n 2. **Property 2 (Strict Preference Ordering):** For coalitions S\\u2081, S\\u2082: $S_1 \\\\succ_i' S_2 \\\\iff B'_i(S_1) < B'_i(S_2)$ \\n- **Proof of Acyclicity**\\n \\n **Theorem 1:** The preference structure cannot create cycles.\\n \\n *Proof by Contradiction:*\\n \\n 1. Assume cycle exists: $S\\u2081 \\\\succ_i' S\\u2082 \\\\succ_i' S\\u2083 \\\\succ_i' S\\u2081$\\n 2. By definition:\\n - $B'\\u1d62(S\\u2081) < B'\\u1d62(S\\u2082)$\\n - $B'\\u1d62(S\\u2082) < B'\\u1d62(S\\u2083)$\\n - $B'\\u1d62(S\\u2083) < B'\\u1d62(S\\u2081)$\\n 3. 
Transitivity of real numbers implies:\\n $B'\\u1d62(S\\u2081) < B'\\u1d62(S\\u2081)$ \\u2192 Contradiction\\n- **Convergence to Stable Matching**\\n \\n **Theorem 2:** The iterative process converges to a stable matching.\", \"the_stable_matching_is_guaranteed_by\": [\"Finite improvement path\", \"Strict preference ordering\", \"Size constraint k prevents degenerate solutions\", \"Following the reviewer's feedback, we provide the complete proof in Appendix G of the updated manuscript.\"]}", "{\"summary\": \"The authors present a method for using Theory of Mind (ToM) and a coalition matching algorithm to allow LLM agents (using various LLM models) to cooperatively perform tasks in environments such as:\\n\\n-Iterative Programming (HumanEval, MBPP)\\n\\n-Debate (Two cooperative teams compete, with the affirmative team taking on various forms of the model (no-ToM, ToM without matching, ToM with matching), and the negative team takes the baseline no-ToM)\\n\\n-Logical and General Reasoning (Using AQUA-RAT and MMLU datasets)\\n\\nThe k-level ToM is set to take in an observation, the action of all agents at the previous timestep and the belief of the actions of all agents at the previous timestep; at the 0-level this is set to start with no belief. These are open-ended action spaces defined by natural language, and the observations, actions and beliefs are textual outputs. (The prompts of these are demonstrated in the appendix)\\n\\nThe Matching coalition algorithm takes a set of LLM agents and the possible matchings of these agents. It then assigns a preference order of these matchings. It aims to create stable matchings based on this preference order, such that agent i prefers agent j over all other pairings and agent j prefers agent i and neither agent has incentive to deviate from this pairing. 
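This stable-pairing rule can be sketched as follows (an illustrative toy, not the paper's exact algorithm: the alignment scores, the threshold, and the greedy pairing loop are my assumptions):

```python
# Toy sketch of the pairing rule: each agent prefers partners with higher
# belief-action alignment, and a pair (i, j) is kept only while the best
# remaining pair still clears an alignment threshold.
# All scores below are made up for illustration.

alignment = {  # symmetric belief-action alignment between agent pairs
    ('A', 'B'): 0.9, ('A', 'C'): 0.4,
    ('B', 'C'): 0.3,
}

def score(i, j):
    return alignment.get((i, j)) or alignment.get((j, i), 0.0)

def match(agents, threshold=0.5):
    unmatched, pairs = set(agents), []
    while len(unmatched) > 1:
        # Best remaining pair by mutual alignment; this top pair is mutually
        # most-preferred among unmatched agents, so keeping it is stable.
        i, j = max(
            ((i, j) for i in unmatched for j in unmatched if i < j),
            key=lambda p: score(*p),
        )
        if score(i, j) < threshold:
            break  # no remaining pair is aligned enough
        pairs.append((i, j))
        unmatched -= {i, j}
    return pairs

# match(['A', 'B', 'C']) pairs A with B (alignment 0.9); C stays unmatched.
```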
A specific rule for this preference order is defined based on the alignment, based on semantic similarity as calculated by the LLM, between beliefs of actions and the actual actions, and the agents are only matched if this is above a certain threshold.\\n\\nThe results show that without matching lower ToM levels have higher cooperative trends, while with matching higher ToM levels have better cooperative trends. In all shown environments the ToM w. Matching (their method) outperforms the baselines of no-ToM, or ToM w.o. matching.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is easy to follow, with the appendix clearly aiding in understanding how the models function.\\n\\nThere is a clear logic as to why each component is added; this is shown through the experimentation and the results. Especially the need for adding coalition matching on top of the theory of mind.\\n\\nThere is a clear increase in Pass@1 in the iterative programming environment with this model.\\n\\nThere is a clear increase in accuracy in the logic and reasoning problems compared to existing methods.\", \"weaknesses\": \"The calculation of the semantic similarity of beliefs and actions is left to the LLM; this does not lend itself to a general approach as the title alludes to. It is made clear throughout the paper that this is applied to LLMs however and I do not see this as a big weakness, but would like to see this made clear in the title if possible.\\n\\nIn the debate environment the baselines where both affirmative and negative lead to a bias of the affirmative winning 65.45% of the time, that is they are both using the same method. This is a cause for concern that this result may not be robust enough and might simply be taking advantage of this bias, is it possible to show the results the other way around? 
(With your model placed in the negative.)\", \"minor\": \"The acronym FTM (Fraction of trust members/Frequency of team matching) is used multiple times making some sections difficult to understand.\", \"questions\": \"1)\\tFor a non-LLM environment, how will the matching scores be calculated?\\n2)\\tIn the debate environment the baselines where both affirmative and negative lead to a bias of the affirmative winning 65.45% of the time, that is they are both using the same method. This is a cause for concern that this result may not be robust enough and might simply be taking advantage of this bias, is it possible to show the results the other way around? (With your model placed in the negative.)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re. Follow-up questions R1 & R2 & R5\", \"comment\": \"We sincerely thank the reviewer's quick feedback. We will address the follow-up questions as follows:\\n\\n> R1: Since\\u00a0b\\u2212ik\\u22121\\u00a0is the hidden mental states of other agents, how could agent i get access to it during ToM inference\\u00a0ToMik?\\n \\n**RR1**: We apologize for the confusion. As stated in line 213/214 of our original manuscript, what we want to denote $b_{-i}^{k-1}$ as *the set of $(k-1)$-level beliefs of other agents*. \\n \\nTo clarify, we revise the notation agent $i$'s beliefs about other agents as **$\\\\\\\\{b_i(a_m)\\\\\\\\}_{m \\\\neq i}$** (detailed in updated manuscript page 4 Eq.1). \\n**This is derived from agent $i$'s own observations and interactions, not direct access to others' mental states.**\\n \\n Additionally, the beliefs are stored as **memory** for each agent, used for ToM inference. \\n \\n> R2: What is the exact format of belief\\u00a0bik?. 
How are those different nested reasoning precesses represented?\\n \\n**RR2**: To make the exact format of beliefs and nested reasoning processes explicit, we expand our formulation to show the recursive structure for an LLM agent $i$ at cooperation round $R$ (This revision is updated in our manuscript page 4 & 5). \\n \\nWe revise its $k$-level ToM function (Eq. 1) as:\\n \\n$\\\\text{ToM}\\\\_i^k(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^{k-1}\\\\_{i, R}(a\\\\_m^R)\\\\\\\\}\\\\_{m \\\\neq i}) := b\\\\_{i, R}^k$\", \"where\": \"- $o_i^{1:R}$ represents agent $i$'s observation history up to round $R$, including current task state, self-actions, and collaborative teammates.\\n- $\\\\hat{a}_{-i}^{1:R-1}$ represents other agents' action history up to round $R-1$\\n- $\\\\\\\\{b^{k-1}\\\\_{i, R}(a\\\\_m^R)\\\\\\\\}\\\\_\\\\{m \\\\neq i\\\\}$ captures agent $i$'s prediction of agent $m$'s action at round $R$ based on $(k-1)$-level ToM reasoning:\\n $b^{k-1}\\\\_{i, R}(a\\\\_m^R) = p(a\\\\_m^R | \\\\text{ToM}\\\\_i^{k-1}(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^{k-2}\\\\_{i, R}(a\\\\_l^R)\\\\\\\\}\\\\_{l \\\\neq i}))$\\n \\nThe recursive belief structure at round $R$ is defined as:\\n \\n- *Level 0: Direct state-action beliefs:*\\n \\n $b\\\\_{i, R}^0 = \\\\text{ToM}\\\\_i^0(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1})$ Just record the cooperation history, **without** considering any ToM reasoning\\n \\n- *Level 1: First-order beliefs*\\n \\n $b\\\\_{i, R}^1(a\\\\_j^R) = p(a\\\\_j^R|b\\\\_{i,R}^0)$ Reasoning about agent $j$'s action in current round $R$. \\n \\n- *Level k: Higher-order nested beliefs*\\n \\n $b\\\\_{i, R}^k(a\\\\_j^R) = p(a\\\\_j^R|\\\\text{ToM}\\\\_i^{k-1}(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^{k-2}\\\\_{i, R}(a\\\\_m^R)\\\\\\\\}\\\\_{m \\\\neq i}))$\\n\\n> R5: My concern is addressed. Thanks for the clarification. Q1 & Q2: Thanks for the clarification. 
I would suggest add those details to the paper.\\n \\n**RR5**. We sincerely thank the reviewer for the constructive feedback. We have added additional discussion (highlighted in purple) on non-LLM belief-alignment calculation in Appendix A (Page 16), and a case study for specialized scores in Appendix C.1 (Page 20).\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"I thank the authors for their very comprehensive response, which has addressed many of my questions and concerns. Below, I include a few final questions and comments.\\n\\n> In all cases, we form one coalition including #min_coalition_size = \\u2308N / 2\\u2309, i.e., 3 agents (task 1) or 2 agents (task 2 & 3). The coalition produces the final output, whether it's code, debate arguments, or reasoning solutions.\\n\\nThis was very helpful to my understanding as previously I assumed that _multiple_ coalitions were involved in solving a given problem. The reasoning behind my interpretation is that:\\n\\n- The authors refer to forming multiple coalitions, and their algorithm is a \\\"matching algorithm\\\", where it is implied that agents are matched together to form (multiple) coalitions.\\n- In [(cooperative) game theory](https://en.wikipedia.org/wiki/Cooperative_game_theory), coalition formation refers to the process of _partitioning_ a set of agents into (multiple) coalitions.\\n\\nThis, however, also raises several more questions and concerns for me as a reviewer.\\n\\n1. First, and this is very minor, if there is only ever one coalition that is used in solving the task, I suggest that the authors drop the phrase \\\"coalition formation\\\" and replace it with something more fitting such as \\\"team selection\\\". If nothing else they should state very clearly very early on in the paper that when they talk about coalition formation, it is about selecting a subgroup of the agents to complete the task, _not_ about partitioning the agents (all of whom then complete the task).\\n2. 
The algorithm for finding multiple coalitions now seems like overkill, because it sounds like it is only required to output a single coalition of a size greater than some constant `#min_coalition_size` where all the agents in the coalition have sufficiently strong belief alignment (with the option to also incorporate skill levels). Do the authors have an argument for why we can't do something simpler?\\n3. In the experimental baselines, which agents solve the task? E.g. if there is one PM and five engineers, do all five solve the task, or is some random sub-group of size `#min_coalition_size` selected to solve the task, or something else? This is also important if task skills are incorporated into coalition formation, because if this consideration is not taken into account in the baseline then it is unclear whether simply selecting the most skilled agents is actually what helps them to perform the task better.\\n\\nI also appreciated the updates to the paper regarding the algorithm for finding coalitions/teams, though I still have a few concerns. First, the authors state that when agent $i$ signals a desire to rematch (in line 9 of Algorithm 1), the algorithm:\\n\\n> - Triggers a new cooperation round and updates the preference orders based on the latest belief-action alignments\\n> - If a stable matching is found, proceed with the new coalitions\\n> - If no stable matching is found, keep the current coalitions for one more round\", \"my_questions_regarding_this_are_as_follows\": \"1. Why would the preference orders need to be updated? Why have the belief-action alignments been updated? \\n2. The authors claim that their algorithm computes a stable matching. How then, is the final step ever to be executed? 
Moreover, even if no stable matching is found, how would running the matching process for one more round help?\\n\\nI appreciated the point about using the $\\\\epsilon$ tolerance to reduce the search space of coalitions, but I would suggest the authors explicitly mention what happens in this instance in the paper.\\n\\nFinally, in equation (2) the belief operator $b^k_i$ applies to a single agent's action $a_j$ (and similarly in line 3 of Algorithm 1). In line 9 of algorithm 1, however, it is applied to $a_{\\\\mu(i)}$, which presumably refers to the joint action of the agents $j \\\\in \\\\mu(i)$. Can the authors please explain what is going on here? Why not just use $B_i(\\\\mu(i))$ on line 9?\"}", "{\"metareview\": \"This paper is about theory of mind with LLM agents, and whether it helps them to form and manage coalitions of other LLM agents. Unfortunately, issues coming mainly from the game theory perspective emerged during the review process. Two of the reviewers had extensive back-and-forth conversations with the authors but decided to maintain their low scores in the end.\", \"additional_comments_on_reviewer_discussion\": \"There was quite a bit of conversation on this paper, with two of the reviewers being quite engaged, asking several follow-up questions, which the authors replied to. However, both of these reviewers decided to maintain their low scores in the end.\"}", "{\"title\": \"Re. Follow-up questions R4\", \"comment\": \"> R4: I do not think FTM and coalition stability are valid evaluation metrics, because they are the exactly same function your algorithm aims to maximize. Using those metric to quantify \\\"cooperation trend\\\" causes circular reasoning issues. The evaluation results presented in Table 2 and Figure 2 are basically a sanity check. Your conclusion that \\\"Raw ToM capabilities alone may not improve cooperation\\\" is not supported by external metrics like Pass@1 and win rates\\n \\n**RR4**. 
We sincerely thank the reviewer for the thoughtful feedback. We agree that robust evaluation is crucial for validating our claims, and we evaluate performance metrics for examining ToM\\u2019s impacts on cooperation:\\n \\n- **In our original manuscript**, we have examined how \\\"Raw ToM capabilities alone may not improve cooperation\\\" in Appendix F, including:\\n - DyLAN with ToM on coding tasks, using *important score* defined by DyLAN (Appendix F.1)\\n - ChatEval with ToM on logic problem-solving and general reasoning, using *accuracy* (Appendix F.2)\\n- **New Results**. We have now added comprehensive experiments incorporating **different ToM configurations** for Project Manager (PM) and Engineers (Eng) and track **performance metrics** (Pass@1) to validate cooperation effects (detailed in Appendix F.3 of the updated manuscript). \\n\\nThe following provides a summary of our new evaluation results (key results are presented in Table 2 & 3):\\n \\n### 1. Initial Performance Comparison\\n \\n**Table 1: Initial Pass@1 Scores (Round 1) on HumanEval and MBPP**\\n \\n| PM ToM | Eng ToM | HumanEval | MBPP |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.87 \\u00b1 0.01 | 0.525 \\u00b1 0.01 |\\n| 0 | 2 | 0.90 \\u00b1 0.02 | 0.56 \\u00b1 0.01 |\\n| 1 | 1 | 0.90 \\u00b1 0.01 | 0.55 \\u00b1 0.02 |\\n| 1 | 2 | 0.90 \\u00b1 0.02 | 0.56 \\u00b1 0.02 |\\n| 1 | 0 | 0.93 \\u00b1 0.02 | 0.56 \\u00b1 0.01 |\\n| 2 | 0 | 0.90 \\u00b1 0.01 | 0.55 \\u00b1 0.02 |\\n \\n**Key Observation**: Similar initial performance across ToM configurations.\\n \\n### 2. 
Performance Degradation Without Matching\\n \\n**Table 2: Pass@1 Score Changes Without Matching (Round 1 \\u2192 Round 5)**\\n \\n| PM ToM | Eng ToM | HumanEval Change | MBPP Change |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.87 \\u2192 0.83 (\\u21934.6%) | 0.525 \\u2192 0.46 (\\u219312.4%) |\\n| 0 | 2 | **0.90 \\u2192 0.83 (\\u21937.8%)** | **0.56 \\u2192 0.45 (\\u219319.6%)** |\\n| 1 | 1 | 0.90 \\u2192 0.87 (\\u21933.3%) | 0.55 \\u2192 0.50 (\\u21939.1%) |\\n| 1 | 2 | **0.90 \\u2192 0.85 (\\u21935.6%)** | **0.56 \\u2192 0.47 (\\u219316.1%)** |\\n| 1 | 0 | 0.93 \\u2192 0.91 (\\u21932.2%) | 0.56 \\u2192 0.52 (\\u21937.1%) |\\n| 2 | 0 | **0.90 \\u2192 0.85 (\\u21935.6%)** | **0.55 \\u2192 0.49 (\\u219310.9%)** |\\n \\n**Key Finding**: Higher ToM configurations show larger performance drops without matching, supporting our claim that raw ToM capabilities may actually hinder sustained performance.\\n \\n### 3. Recovery with Matching Mechanism\\n \\n**Table 3: Performance Recovery with Matching (Round 5)**\\n \\n| PM ToM | Eng ToM | HumanEval | MBPP |\\n| --- | --- | --- | --- |\\n| 0 | 1 | 0.86 (\\u21913.6%) | 0.46 (\\u00b10%) |\\n| 0 | 2 | **0.87 (\\u21914.8%)** | **0.47 (\\u21914.4%)** |\\n| 1 | 1 | 0.88 (\\u21911.1%) | 0.52 (\\u21914.0%) |\\n| 1 | 2 | **0.88 (\\u21913.5%)** | **0.55 (\\u219117.0%)** |\\n| 1 | 0 | 0.93 (\\u21912.2%) | 0.57 (\\u21919.6%) |\\n| 2 | 0 | **0.96 (\\u219112.9%)** | **0.60 (\\u219122.4%)** |\\n \\n**Key Result**: Our matching mechanism effectively leverages ToM capabilities, **with highest improvements in PM(ToM=2) configurations.**\\n \\n**Summary of Key Findings Using External Metrics (Pass@1):**\\n \\n1. **ToM Alone Is Not Sufficient:**\\n - Similar initial performance across ToM levels\\n - Performance degradation without matching, especially in higher ToM configurations\\n2. 
**Effectiveness of Matching Mechanism:**\\n - Significant recovery in Pass@1 scores with matching\\n - Highest improvements in high-ToM PM configurations (up to +12.9% HumanEval, +22.4% MBPP)\\n3. **Optimal Configuration:**\\n - PM(ToM=2) + Eng(ToM=0) with matching achieves best sustained performance\\n - Demonstrates the value of our matching mechanism in leveraging ToM capabilities effectively\"}", "{\"title\": \"Regarding multi-agent environment setting and clarification for new experiments\", \"comment\": \"> P1. The revised definition is much clearer than the original one presented in the paper. However, it seems to be limited to specific scenarios and ToM inferences, and therefore lacks generalizability. For example, the proposed ToM formula does not apply to partially observable environments where agents do not always have access to other agents' actions.\\n> \\n\\n**R.P1** The reviewer raises a valid point about partial observability. However, in our cooperative multi-agent setting:\\n\\n- **Complete action observability is a standard assumption in cooperative environments** where agents are working together toward common goals. This assumption is widely used in existing LLM-based multi-agent cooperation environment, e.g. MetaGPT and DyLAN.\\n- Addressing information asymmetry would introduce additional complexity that could obscure our main findings about ToM's impact on cooperation. While extending our framework to partial observability settings is an interesting direction for future work, it lies outside the scope of our current investigation into ToM-based coalition formation.\\n\\n\\n> P2. The newly added experimental results still do not support the claimed conclusions. Most conditions in Tables 2 and 3 are worse than those in Table 1, meaning that introducing ToM with an additional 4 rounds of interaction and the matching mechanism does not improve team performance. 
The only exception is the last condition in which the claimed degradation and recovery are observed. More experiments are needed to explain the divergence in performance.\\n> \\n\\n**R.P2.** Regarding the performance comparisons between Tables 1-3, we want to clarify several key points:\\n\\n1. **Purpose of ToM Integration:**\\n - Our approach is **not about adding ToM** to improve cooperation directly\\n - Rather, we provide a **\\\"plug-and-play\\\" coalition formation** mechanism (mentioned in our response to W5) that optimizes team composition **based on existing ToM capabilities**\\n - **Main goal.** The goal is to leverage ToM capabilities effectively in multi-agent cooperation **through optimal coalition formation**.\\n2. **Performance Improvements:**\", \"key_results_demonstrate_the_effectiveness_of_our_approach\": [\"PM(ToM=2) with matching achieves significantly better performance:\", \"HumanEval: 0.90 \\u2192 0.96 (+6.7%)\", \"MBPP: 0.55 \\u2192 0.60 (+9.1%)\", \"For other settings, our matching demonstrates effective recovery from performance degradation (especially for high ToM).\", \"The reviewer notes performance variations across conditions. This actually supports our key insight:\", \"Higher ToM (cognitive) capabilities alone don't guarantee better cooperation performance. This challenges the common assumption that more sophisticated cognitive abilities automatically lead to better teamwork\", \"Our matching mechanism helps optimize team cooperation.\"]}", "{\"title\": \"Close to the end of rebuttal period--December 2nd\", \"comment\": \"We sincerely appreciate your time and efforts during the review and rebuttal period.\\n\\nAs we approach the end of the rebuttal period on December 2nd, we wanted to follow up on our previous discussion. We hope our response has adequately addressed your concerns. If any points require further clarification, we would be grateful for your feedback.
Thank you again for your consideration throughout this process.\"}", "{\"title\": \"Follow-up questions\", \"comment\": \"Hi authors, thanks for your response. I am still confused by the following questions and would like to hear your input.\", \"r1\": \"Since $b_{-i}^{k-1}$ represents the hidden mental states of other agents, how could agent i get access to it during ToM inference $\\\\text{ToM}_i^k$?\", \"r2\": \"What is the exact format of belief $b_{i}^{k}$? How are those different nested reasoning processes represented?\", \"r4\": \"I do not think FTM and coalition stability are valid evaluation metrics, because they are exactly the same function your algorithm aims to maximize. Using those metrics to quantify \\\"cooperation trend\\\" causes circular reasoning issues. The evaluation results presented in Table 2 and Figure 2 are basically a sanity check. Your conclusion that \\\"Raw ToM capabilities alone may not improve cooperation\\\" is not supported by external metrics like Pass@1 and win rates.\", \"r5\": \"My concern is addressed. Thanks for the clarification.\\n\\nQ1 & Q2: Thanks for the clarification. I would suggest adding those details to the paper.\"}", "{\"comment\": \"We thank reviewer zFGd for recognizing our paper's clear motivation, novelty, and promising results.\", \"we_address_specific_concerns_below_point_by_point\": \"> W1. Whilst the authors do mention that the coalition formation is generally an **NP-hard problem**, they do not offer any ideas about potential future possibilities that would help with the scalability of the framework\\n\\n \\n We thank the reviewer for raising this important point about scalability. We propose to add a dedicated discussion of scalability solutions in our future work section (section 7).
\\n\\n- For example, we can employ **preference list truncation** where agents only maintain preferences for **a bounded number of potential partners**, reducing the complexity from O(n\\u00b2) to O(kn) where k is a fixed constant much smaller than n.\\n- Additionally, for future work, we can implement a **hierarchical matching** approach where agents are first grouped into clusters based on their ToM levels and task requirements, and then matching is performed within these **smaller clusters**. This would reduce the search space.\\n\\n> W2. I do not understand the prompt referenced in Appendix A and the corresponding LLM output. The belief model is rather vague, and when looking at the output of the alignment scores it seems a bit arbitrary - e.g. the belief model does not mention using an object oriented approach, but in the alignment score this seems to be highly valued? I am just slightly concerned that some of the alignment scores outputted by the LLMs are not particularly strong signals and ideally it would be measured using something more robust.\\n> Overall, my main concern is the potential scalability of the proposed framework, with firstly the coalition forming being difficult and secondly the requirement to generate beliefs over all other agents. Furthermore, whilst the empirical results are good and I am not downplaying them, I am not convinced the proposed settings are those that can really leverage ToM fully. However, this is not impacting my score.\\n \\n \\nWe acknowledge the reviewer's concern about the robustness of alignment scores and propose to clarify:\\n\\n- **Alignment Score Measurement:**\\nFor LLM-based agents, our approach leverages one of the key advantages of LLMs: their ability to handle **open-ended trajectories** and perform nuanced semantic comparisons. 
The self-evaluation method allows LLMs to assess alignment in complex, unstructured action spaces where traditional similarity metrics might fall short.\\n- **Alternative Approaches:**\\nFor non-LLM environments with **well-defined action spaces**, traditional metrics (e.g., Euclidean distance) could be used instead. We acknowledge this as an important direction for future work and will discuss alternative measurement approaches in our revised paper.\\n- We want to emphasize that our core contribution is the **coalition formation** mechanism as a **plug-and-play** approach for improving multi-agent cooperation with ToM agents.\\n\\n> Q1. For the insight that low ToM exhibits better cooperation compared to high ToM, I wonder how specific this is to the environment being looked at. For example, the multi-agent programming setting, at least to me, does not strike me as an environment that requires much ToM to successfully cooperate in, therefore low ToM being more successful may simply be due to the lower complexity of using it. Have the authors noticed this same trend in other environments?\", \"our_evaluation_utilized_different_settings\": [\"**Programming Tasks**: Despite being relatively *structured*, we observed higher ToM agents exhibiting overthinking and reduced cooperation.\", \"**Debate Setting** (Section 6.4): Even in this highly *social* environment, we found higher ToM agents sometimes overthinking and showing less cooperative behavior initially.\", \"We agree this opens interesting directions for future research on how environmental complexity interacts with ToM levels.\"]}", "{\"title\": \"Regarding High ToM formulation\", \"comment\": \"> P3. Similarly, the higher-order ToM definition is limited to agent i's prediction of agent j's action (in some sense, inferring the intention), while ignoring other types of mental state inferences (e.g., belief, desire). 
For example, the current 2nd-order inference would be \\\"how i thinks about j's action, given i's 1st-order inference of j's action.\\\" Other 2nd-order nested ToM reasoning processes commonly discussed in the literature, such as \\\"how i thinks j is thinking about i's action,\\\" cannot be represented in the proposed definition.\\n> \\n\\n**R.P3**\", \"let_us_explain_using_a_chess_game_example_to_show_our_formulation_aligns_with_real_world_tom_reasoning\": [\"In chess, when player i predicts j's next move $a_j^R$, **i cannot directly access j's thoughts** **or beliefs**, but can **only form beliefs based on observed game history** $\\\\hat{a}_{-i}^{1:R-1}$ (including i's own past moves) and j's responses.\", \"For example, if i previously moved their knight aggressively and j responded defensively, i forms beliefs about how j interprets i's aggressive style - but this is **i's interpretation of j's thinking, not j's actual beliefs.**\", \"Our formulation $b_{i,R}^1(a_j^R)$ represents i thinking \\\"Based on how j has reacted to my past moves, **I believe j thinks I play aggressively**, so j will likely make a defensive move\\\" - this is entirely based on i's observations and beliefs.\", \"Even when i thinks \\\"j expects me to move my bishop,\\\" this is **i's belief about j's expectation**, not j's actual expectation - **j might have a completely different understanding of i's strategy**.\", \"Therefore, $b_{i,R}^1(a_j^R)$ captures **i's belief structure about j's thinking regarding i's actions**, constructed entirely from i's perspective **without access to j's true beliefs.**\", \"Let us break down how our formulation at Level 2 ToM ($b_{i,R}^2(a_j^R)$) captures \\\"how i thinks j is thinking about i's action\\\" using the chess game example:\", \"### Level 2 ToM Expression\", \"$b\\\\_{i,R}^2(a\\\\_j^R) = p(a\\\\_j^R|\\\\\\\\text{ToM}\\\\_i^1(o\\\\_i^{1:R}, \\\\hat{a}\\\\_{-i}^{1:R-1}, \\\\\\\\{b^0\\\\_{i,R}(a_m^R)\\\\\\\\}\\\\_{m \\\\neq i}))$\", \"### 
Step-by-Step Reasoning Process:\", \"1. **Base Knowledge (Level 0)**\", \"i observes the current game state $o_i^{1:R}$\", \"i knows the history of moves $\\\\hat{a}_{-i}^{1:R-1}$ including i's own past moves\", \"This forms i's base beliefs $b^0_{i,R}(a_m^R)$\", \"2. **First Layer of Reasoning (Level 1)**\", \"i thinks: \\\"Given my previous aggressive knight moves...\\\"\", \"i forms beliefs about j's thinking: \\\"j has seen my aggressive style\\\"\", \"This is captured in $\\\\\\\\text{ToM}\\\\_i^1$ *which uses history $\\\\\\\\hat{a}\\\\_{-i}^{1:R-1}$*\", \"3. **Second Layer of Reasoning (Level 2)**\", \"i thinks: \\\"j probably thinks I will make another aggressive move\\\"\", \"i reasons: \\\"j is likely preparing a defensive response because they think I'll be aggressive\\\"\", \"This nested thinking is captured in $b\\\\_{i,R}^2(a\\\\_j^R)$\", \"4. **Prediction Formation**\", \"Based on this recursive reasoning, i predicts j's action $a\\\\_j^R$\", \"Example: \\\"j will likely position their bishop defensively because they think I'm planning another aggressive knight move\\\"\"]}", "{\"title\": \"Re. C. Missing Experimental Details\", \"comment\": \"We thank the reviewer's comments, which have helped us clarify several important aspects of our experimental settings.\\n\\n> C-W1. I assume the ToM level for debating agents arguing for the negative side is 0, but it would be good to clarify this.\\n> \\n\\n**R.C-W1.**\\n\\nYes. As stated in our original manuscript \\u201caffirmative team with ToM against a team without ToM\\u201d. Thus, ToM level for the negative side is 0 in our evaluation. \\n\\nAdditionally, we conducted symmetric experiments by placing ToM agents on the negative side while keeping the affirmative side without ToM capabilities. 
\\n\\n- As shown in the table, the performance of our method with ToM and Matching showed a win rate of **36.36%.**\\n- This result mirrors our previous findings where ToM agents were on the affirmative side, demonstrating that the effectiveness of our coalition formation mechanism is consistent regardless of debate sides.\\n\\n| Setting | Win Rate |\\n| --- | --- |\\n| No-ToM | 34.55% (65.45% Win Rate for the affirmative side) |\\n| ToM w.o. Matching | 25.45% |\\n| ToM w. Matching (Ours) | **36.36%** |\\n\\n> Minor: The acronym FTM (Fraction of trust members/Frequency of team matching) is used multiple times, making some sections difficult to understand.\\n> \\n\\nWe have updated the manuscript to use *Fraction of trusted members* only.\\n\\n> C-W2. Once coalitions are formed, how do the prompts/instructions given to the agents in different coalitions actually change?\\n> \\n\\n**R.C-W2.** To clarify, the base prompts/instructions for agents do not change substantially after coalition formation. The Base Prompt Structure contains: \\n\\n1. Agent role definition; \\n\\n2. Task description; \\n\\n3. **List of current teammates (this is the only part that updates with coalition formation)**\\n\\n> C-W3. How many agents are actually present in the various settings, and what are the sizes of the coalitions that are formed?\\n> \\n\\n\\n**R.C-W3.** Here are the specific agent numbers and coalition details for each task:\\n\\n1. **Iterative Programming Task**: 5 agents in total, including one Project Manager + four Engineers. The setting is the same as the motivating example (figure 1), as described in line 120.\\n2. **Debate Task**: 6 agents in total (three agents on each side, as detailed in lines 484-485, 493-495)\\n3. **Logical and General Reasoning Task**: 3 agents in total, with different ToM levels: 0, 1, 2\\n\\nIn all cases, we form one coalition including #min_coalition_size = $\\lceil N/2 \\rceil$, i.e., 3 agents (task 1) or 2 agents (task 2 & 3).
The coalition produces the final output, whether it's code, debate arguments, or reasoning solutions. \\n\\nWe will make these details more explicit in the paper by adding a dedicated section describing the experimental setup and coalition formation process for each task type.\"}" ] }
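The scalability response in this record mentions preference-list truncation as a way to shrink the coalition-matching search from O(n²) to O(kn) by keeping only each agent's top-k partners. The sketch below illustrates that idea; the agent names, alignment scores, and the simple greedy pairing rule are hypothetical, not the authors' implementation.

```python
def truncated_preferences(scores, k):
    """Keep only each agent's top-k partners, so the stored preference
    table has O(k*n) entries instead of the full O(n^2) table."""
    return {
        agent: sorted(row, key=row.get, reverse=True)[:k]
        for agent, row in scores.items()
    }

def greedy_match(prefs):
    """Greedily pair agents that list each other; a stand-in for a full
    stable-matching routine, kept minimal for illustration."""
    matched, pairs = set(), []
    for a, plist in prefs.items():
        if a in matched:
            continue
        for b in plist:
            if b not in matched and a in prefs.get(b, ()):
                pairs.append((a, b))
                matched.update((a, b))
                break
    return pairs

# Hypothetical pairwise alignment scores between four agents.
scores = {
    "pm":   {"eng1": 0.9, "eng2": 0.4, "eng3": 0.1},
    "eng1": {"pm": 0.8, "eng2": 0.3, "eng3": 0.2},
    "eng2": {"eng3": 0.7, "eng1": 0.6, "pm": 0.5},
    "eng3": {"eng2": 0.9, "pm": 0.2, "eng1": 0.1},
}
prefs = truncated_preferences(scores, k=2)  # every list holds <= 2 partners
pairs = greedy_match(prefs)                 # [("pm", "eng1"), ("eng2", "eng3")]
```

A production version would swap `greedy_match` for a deferred-acceptance style routine, but the truncation step is where the claimed O(kn) saving comes from.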
0G6rRLYcxm
Maximum Next-State Entropy for Efficient Reinforcement Learning
[ "Dianyu Zhong", "Yiqin Yang", "Ziyou Zhang", "Yuhua Jiang", "Bo XU", "Qianchuan Zhao" ]
Maximum entropy algorithms have demonstrated significant progress in Reinforcement Learning~(RL), which offers an additional guidance in the form of entropy, particularly beneficial in tasks with sparse rewards. Nevertheless, current approaches grounded in policy entropy encourage the agent to explore diverse actions, yet they do not directly help agent explore diverse states. In this study, we theoretically reveal the challenge for optimizing the next-state entropy of agent. To address this limitation, we introduce Maximum Next-State Entropy (MNSE), a novel method which maximizes next-state entropy through an action mapping layer following the inner policy. We provide a theoretical analysis demonstrating that MNSE can maximize next-state entropy by optimizing the action entropy of the inner policy. We conduct extensive experiments on various continuous control tasks and show that MNSE can significantly improve the exploration capability of RL algorithms.
[ "Deep Reinforcement Learning; Maximum Entropy Reinforcement Learning" ]
https://openreview.net/pdf?id=0G6rRLYcxm
https://openreview.net/forum?id=0G6rRLYcxm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "o0qyZLkCpi", "nP6sP27rp6", "n0Ws9qIeuN", "mwnQBVXjTp", "kDP0i0bOVy", "j5Rha12mvG", "cAZzRd8J36", "bI41CXSORr", "ZvsILyDLjf", "Y9VDDsMMlL", "UmiPmersPX", "TL6EzHYoPZ", "SeRpq0rqJy", "QyStgffNKz", "QLXy70uXfe", "P2rGBh51Tk", "OUxaODC8PD", "KtNF9Uk3h1", "JFQ77Xw5Pn", "GUd6SyTNFK", "AxIIhno8nA", "7bYMfVwkTl", "3OUqcq9S9H", "2fqDaq4QRl" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730785118500, 1730446775574, 1732506618596, 1732295162256, 1732295729755, 1730746533049, 1732716846373, 1732512983941, 1733715418979, 1732295272832, 1732481969513, 1733124662134, 1732295426413, 1732716879323, 1733124595925, 1732628943240, 1732529839444, 1732295604408, 1730089987352, 1732514456977, 1732529801788, 1732295646998, 1732529865225, 1733124624034 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_S96H" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_pNrB" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_ZUw7" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_ybLa" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_ybLa" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_ZUw7" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Reviewer_ybLa" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ], [ "ICLR.cc/2025/Conference/Submission5644/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This article presents a new reinforcement learning method called Maximum Next State Entropy (MNSE) which optimizes next-state entropy through a reversible action mapping layer. MNSE shows better performance than existing methods in complex environments with nonlinear actuators and emphasizes the importance of appropriate model and parameter settings.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The experiments cover multiple continuous control tasks, including complex environments like robotic arm control. The results show that MNSE outperforms traditional maximum entropy reinforcement learning methods and other reward-based exploration strategies in these tasks. This indicates the significant potential of the MNSE method in practical applications.\\n\\n\\n2. The paper provides rigorous theoretical analysis and demonstrates its effectiveness, which is significant for advancing research and development in the field of reinforcement learning.\\n\\n\\n3. The paper is written in a clear and concise manner. 
This helps readers better understand and grasp the core ideas and technical features of the method.\", \"weaknesses\": \"Given that MNSE relies on the accurate estimation of the dynamic model, how do you ensure the accuracy of these estimations and avoid overfitting?\\n\\nAdditionally, could you provide guidance on how to reasonably select the hyper-parameters to optimize the algorithm's performance?\", \"questions\": \"See questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a thorough theoretical analysis of the relationship between maximizing next-state entropy and policy entropy. The authors propose a novel framework that links these two types of entropy through an innovative approach that utilizes an inner policy and an action mapping function. Based on this theoretical foundation, the authors introduce the Next-State Entropy Maximization algorithm (MNSE), which is shown to be particularly effective in environments with redundant action spaces. This work contributes valuable insights into entropy maximization, bridging next-state novelty concepts with policy design in reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a fresh perspective on state novelty by theoretically linking next-state entropy and policy entropy. While state novelty algorithms are known to enhance agent performance in various environments, the theoretical analysis of state entropy remains underexplored. This paper addresses this gap by establishing a detailed connection between next-state entropy and policy entropy, achieved through an internal policy and an action mapping function.\\n\\n2. The authors provide a rigorous and structured proof process, making it easy for readers to follow the logical progression and understand the interplay between the entropies. 
This systematic approach gives a solid foundation for the proposed MNSE framework, and the clarity of the theoretical contributions makes the complex subject matter more approachable for readers.\\n\\n3. The MNSE algorithm is an impressive practical outcome of this research, showcasing strong empirical results in environments with redundant action spaces. This suggests that the algorithm could be beneficial for a wide variety of applications where action redundancy exists, offering new avenues for exploration in reinforcement learning.\", \"weaknesses\": \"1. The paper includes performance comparisons at 20% and 40% EAP, as seen in Experiment 2. However, expanding these comparisons to include higher EAP levels, such as 80%, 60%, and 100%, would be beneficial. Analyzing performance across a broader range of EAP settings could offer a more comprehensive view of the algorithm\\u2019s robustness and adaptability to different entropy thresholds.\\n\\n2. The current experiments effectively demonstrate MNSE\\u2019s performance in Mujoco and Meta-World environments. However, adding further experiments focused on pure exploration tasks\\u2014such as those found in maze environments or other exploration-heavy scenarios\\u2014would be valuable. Such experiments could provide deeper insights into how MNSE's maximization of next-state entropy impacts exploration behavior, highlighting its effectiveness in environments where exploration quality is critical.\\n\\n3. While MNSE is compared with well-established state novelty and exploration algorithms, such as MinRed and NovelID (both published in 2021), comparisons with more recent approaches (from 2022 or 2023) could further strengthen the relevance and appeal of this work. Including newer algorithms in the comparative analysis could provide a more current context for MNSE\\u2019s performance and underscore its competitiveness among recent advancements in state novelty and exploration research.\", \"questions\": \"1. 
In the Related Works section, the authors reference several algorithms (such as SAC, DSPG, CBSQL, and max-min entropy frameworks). Could the authors elaborate on why these algorithms were not included in the experiment section? Understanding the selection criteria for comparison could provide further clarity on the position of MNSE within the broader landscape of state and policy entropy methods.\\n\\n2. SAC is used as the baseline for updating the policy entropy term in the MNSE framework. It would be interesting to learn how MNSE might perform if alternative algorithms, such as DSPG or CBSQL, were used instead. Could the authors discuss potential outcomes or the theoretical basis for choosing SAC over these other algorithms? Insights into how the baseline choice affects MNSE\\u2019s performance would be helpful for researchers considering alternative implementations of the framework.\\n\\n3. The entropy of all visited states has long been an interesting topic, though challenges remain in addressing it within a solid theoretical framework. Could the authors discuss the correlation between next-state entropy and the total entropy of visited states? This could provide further insight into MNSE\\u2019s implications for overall state entropy in agent behavior.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ybLa\", \"comment\": \"Dear Reviewer,\\n\\nThank you for the reviewer\\u2019s valuable suggestion! We have added the assumption $\\\\sigma>1$ in the toy example (Section 4.1, Line 184). 
Based on this assumption, we have provided theoretical analysis and numerical results in Appendix A (Line 734) to demonstrate that the \\n$\\mu$ variable is positively related to the entropy.\\n\\nWe sincerely appreciate your feedback, which has helped make our paper more rigorous.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer S96H\", \"comment\": \"Dear Reviewer,\\n\\nThank you for finding our work well written and significant for advancing research.\\nWe address the points you raised in the following.\\n\\n**W1: How do you ensure the accuracy of the dynamic model estimations in MNSE and avoid overfitting?**\\n\\n**A for W1:**\", \"we_ensure_the_accuracy_of_the_dynamic_model_estimations_in_the_following_ways\": \"- **Choice of distribution:** \\n We adopt discrete multinomial distributions rather than Gaussian distributions for the inverse dynamic model. This choice better captures the dynamic characteristics of real-world physical systems. \\n\\n- **Iterative updates:** \\n The inverse dynamic model is continuously updated throughout the whole training process. This iterative refinement ensures that the model adapts and maintains accuracy as training progresses. \\n\\nTo avoid overfitting, we employ the following strategies: \\n\\n- **Continuously collected data:** \\n During the iterative training of the dynamic model, the replay buffer $D$ continuously incorporates new samples collected by the policy. This ongoing data augmentation enhances diversity and improves the model's generalization ability, effectively preventing overfitting. \\n\\n- **Regularization:** \\n We include a weight decay term in the Adam optimizer to constrain the L2-norm of the model parameters, further mitigating overfitting risks.
\\n\\n\\n**W2: Could you provide guidance on how to reasonably select the hyperparameters to optimize the algorithm's performance?**\\n\\n**A for W2:** \\n\\n- **Selection of hyperparameters in SAC backbone:**\\nOur algorithm, MNSE, is developed based on the SAC algorithm from the RL Baselines3 Zoo. All hyperparameters (e.g., learning rate, buffer size) are consistent with the SAC defaults in RL Baselines3 Zoo, as these have already been optimized. This ensures a fair comparison with baseline methods.\\n\\n- **Selection of unique parameters in MNSE:**\\nFor our algorithm, a key hyperparameter is $N$, the number of parameters in the piecewise linear function. As illustrated in Figure 5, the algorithm's performance improves as $N$ increases. However, beyond $N \\\\geq 20$, the performance stabilizes. For control tasks similar to Mujoco or Metaworld, we recommend setting $N=20$.\"}", "{\"title\": \"Response to Reviewer ybLa\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your valuable comments.\\nWe hope the following statement can address your concern.\\n\\n**W1: Further clarification of example on aging equipment in introduction.**\\n\\n**A for W1:** \\nYes, we have revised this section in the introduction to provide further clarification. \\nIn our setup, before learning the policy, the equipment already has saturation or deadzone effects (caused by aging equipment or design redundancy). 
During policy learning and deployment, the action space remains unchanged.\\n\\nOur work focuses on policy learning when there is redundancy in the action space, rather than on adapting the policy when the action space itself changes.\\n\\n\\n**W2: Abuse of notation for reward function.**\\n\\n**A for W2:** \\nYes, we have modified the notation to ensure consistency throughout the paper.\\n\\n\\n**Q1: There are some differences between MNSE and SAC when the effective action proportion is 100\\\\%**\\n\\n**A for Q1:** \\nYes, there are differences in performance between MNSE and SAC when the effective action proportion (EAP) is 100\\\\%, and they are not exactly equal. There are two reasons for this:\\n\\n* Even when the actuator's torque and the input commands are perfectly linear, the change in the state is not necessarily linear. MNSE is designed to eliminate such redundancy, leading to better performance in tasks like HalfCheetah.\\n* At EAP=100\\\\%, due to model learning and gradient training, the action mapping layer learned by MNSE may have a slight discrepancy from the true identity function (where actions = inner actions), which results in small performance differences, as observed in tasks like Ant.\\n\\n**Q2: How can we guarantee that the mu variable is positively related with the entropy?**\\n\\n**A for Q2:** \\nProviding a rigorous mathematical analysis is challenging.\\nHowever, we have included numerical results for different sigma values, as shown in Figure 6 in Appendix A.\\nThe results indicate that next-state entropy increases with higher mu values, demonstrating a positive correlation between mu and entropy.\\n\\n**Q3: Will maximum next-state entropy benefit policy learning in discrete action space environments?**\\n\\n**A for Q3:** \\nAs discussed in Section 4,\\nin discrete action spaces, the fewer redundant actions there are, the smaller the gap between next-state entropy and policy entropy.\\nThis insight suggests that evaluating actions and removing those with identical effects within the discrete action set can help improve exploration efficiency.\\n\\n**Q4: Can you explain in more detail how to derive the content of equation (8)?**\\n\\n**A for Q4:**\\nThe content of equation (8) is derived using the *Change of Variable Theorem*. \\n\\nThe Change of Variable Theorem states that if you have a random variable $ X $ with a probability density function $p_X(x)$, and you apply a transformation $ y = g(x) $, then the probability density of the new variable $ Y $ can be obtained by adjusting the original density using the Jacobian of the transformation:\\n\\n$$\\np\\\\_Y(y) = p\\\\_X(g^{-1}(y)) \\\\left| \\\\frac{d}{dy} g^{-1}(y) \\\\right|\\n$$\\n\\n\\n**Q5: Why are the parameters of the inverse dynamics of the inner policy optimized first, instead of those of the mapping layer?**\\n\\n**A for Q5:**\\nThis is a technical decision related to the initialization process. At the beginning, the inverse dynamics network is randomly initialized, while the mapping layer is initialized as an identity function (i.e., actions = inner actions). Optimizing the inverse dynamics network first helps avoid the negative impact of the random initialization of the inverse dynamics network on the updates to the mapping layer.
By optimizing the inverse dynamics network first, we ensure that the mapping layer can later benefit from a more stable inverse dynamics model, leading to more effective and efficient learning.\"}", "{\"summary\": \"The authors propose a new maximum entropy reinforcement learning algorithm where the entropy of the next state is enforced while learning the policy. First, a particular policy parameterization is used. Inner actions are first sampled according to a parameterized inner policy (i.e., a parameterized distribution from states to features, called inner actions) and the actions are transformations of these inner actions (piecewise linear in practice, such that the density of actions can be computed based on the density of inner actions using the change of variable theorem). Second, the entropy of next states is decomposed as the sum of: the entropy of the inner policy, the expected probability of the inner actions knowing the state transitions (i.e., knowing the current state and the future state), and a constant term. Then the inner policy is maximized using SAC (applying the outer actions in the MDP). The piecewise-linear transformation is computed to maximize the expectation of the probability of inner actions knowing the state transitions. The probability of (inner) actions knowing the state transition is learned by maximum likelihood estimation. This approach eventually leads to better control policies compared to algorithms that only account for the entropy of actions.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The problem at hand is very important to the RL community.\\n2. The approach is novel, the authors introduce a new promising intrinsic reward bonus.\", \"weaknesses\": \"1. Some points are unclear and have raised questions, some of which may be critical to the paper. See questions below.\\n2.
The authors have missed a large part of the literature that is active in maximizing the entropy of states visited by a policy along trajectories [1, 2, 3, 4, 5]. The latter can be performed with simpler algorithms compared to the one proposed in the paper. In practice those algorithms allow to have a good state coverage, which is the objective pursued by the authors. They should be added in the related works, discussions and experiments.\\n3. There are errors (or shortcuts) in some equations and notations that make the paper hard to follow and prevent ensuring the correctness of all mathematical developments. Here are those I noticed:\\n\\na. In section 3, the reward function is sometimes a function of the action, sometimes not.\\n\\nb. In equation (2), the distribution $P^\\\\pi$ is undefined.\\n\\nc. In section 4.1, how are $p(x)$ and $\\\\pi$ related? (There is also a clash of notation between the constant $\\\\pi$ and the policy $\\\\pi$)\\n\\nd. The inverse dynamic of inner policy is not defined in the main body.\\n\\ne. In equation (9), I suppose an expectation is missing over the random variable $s$ in the gap term.\\n\\nf. In equations (10) and (12), the variable $s$ is again undefined in the optimization problem. Is it on expectation or for all $s$, how is it done in practice?\\n\\ng. Same problem in equation (13), where a function independent of $s$ equals a function of $s$.\\n\\nh. In section 5.3, is the $x$-variable in the equation the inner action $e$?\\n\\ni. In many equations $e$ appears as a variable, but should be replaced by $f^{-1}(a, \\\\theta)$ as the expectations are over $a$.\\n\\nj. There are three notations for parametric functions that are used together. For example, we have $f(e, \\\\theta)$, $f^\\\\theta$ and $f_\\\\theta$.\\n\\n4. Section 3 focusses on defining conditions under which the action entropy equals the state entropy. The latter is done based on non-redundant actions and non-redundant policies.
From my understanding, the inner policy is not non-redundant, and there is no guarantee that the (outer) policy is eventually non-redundant after optimization. While it can be argued that the discussion is in itself interesting, I think it is confusing to introduce at the very beginning of the paper something that is unused afterwards.\\n5. There is a methodological error in the experiment. The entropy of the next state is never shown, there is thus no evidence that the method learns high entropy policies.\\n\\n[1] Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., & Salakhutdinov, R. (2019). Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274.\\n\\n[2] Guo, Z. D., Azar, M. G., Saade, A., Thakoor, S., Piot, B., Pires, B. A., ... & Munos, R. (2021). Geometric entropic exploration. arXiv preprint arXiv:2101.02055.\\n\\n[3] Islam, R., Ahmed, Z., & Precup, D. (2019). Marginalized state distribution entropy regularization in policy optimization. arXiv preprint arXiv:1912.05128.\\n\\n[4] Hazan, E., Kakade, S., Singh, K., & Van Soest, A. (2019, May). Provably efficient maximum entropy exploration. In International Conference on Machine Learning (pp. 2681-2691). PMLR.\\n\\n[5] Liu, H., & Abbeel, P. (2021). Behavior from the void: Unsupervised active pre-training. Advances in Neural Information Processing Systems, 34, 18459-18473.\", \"questions\": \"1. What is the advantage of using the function $f$ to increase the gap (and thus control the next state entropy), compared to simply using as intrinsic reward the log likelihood of the inverse dynamics model (and choosing f as an identity function, such that: actions = inner actions) in SAC? Similarly, why not simply learn a forward model of the MDP, and use the log likelihood of that model as intrinsic reward, to enforce the entropy of next states?\\n2. Could the authors clarify the different parametric functions at hand? 
What is the advantage of the custom transformation in section 5.3 instead of a normalizing flow?\\n3. A discretized multinomial distribution is used for the inverse dynamics model. What is the justification for that instead of a normalizing flow (or auto-encoder + ELBO for learning) and how is it limiting in practice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ZUw7 (1/2)\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your reply! We will address your follow-up questions below.\\n\\n**Q1.1: Misinterpretation of exploration with marginal state entropy:**\\n\\n**A for Q1.1:** Thank you for your valuable feedback. We have revised our statement in the related work section in the revised paper to better reflect the literature. The corrected description is as follows:\\n\\nState Entropy Maximization aims to learn a reward-free policy in which state visitations are uniformly distributed across the state space, thus promoting robust policy initialization and efficient adaptation. \\n**Additionally, when task rewards are available, incorporating state entropy as an intrinsic reward has proven to be an effective approach for enhancing exploration.**\\n\\n**Q1.2: Pursuing the state entropy maximization objective would strengthen the experimental setting.**\\n\\n**A for Q1.2:**\\nAs suggested, we conducted additional experiments on MuJoCo tasks with input nonlinearity, comparing our method with APT [1] and SMM [2].\\nThe hyperparameters are consistent with the defaults in URLB [9].\\nAs shown in the table below, our method outperforms the baseline approaches.\\n\\nThe key advantage of our method lies in the introduction of an action mapping layer following the inner action, which directly shapes the action space. 
In contrast to APT and SMM, which rely on additional intrinsic rewards to train the agent, our approach is more direct and effective.\\n\\n|Method| Ant | HalfCheetah | Hopper | Walker2d |\\n|--------------------|--------------------|--------------------|--------------------|--------------------|\\n| APT[1] | 1985.99\\u00b1655.18 | 7199.61\\u00b11784.04 | 2559.85\\u00b1559.68 | 2584.04\\u00b1739.52 |\\n| SMM[2] | 1550.26\\u00b1419.62 | 7275.25\\u00b11477.57 | 1920.11\\u00b1370.66 | 2927.85\\u00b1975.06 | \\n| MNSE (Ours) | **3586.25\\u00b1681.38** | **7824.08\\u00b11190.98** | **3013.21\\u00b1149.09** | **4240.30\\u00b1484.45** |\\n\\nTable 3. Comparisons with additional baselines in MuJoCo tasks with input nonlinearity.\\n\\n**Q2: Errors in some equations and notations:**\\n\\n**A for Q2:** \\nThanks for your detailed and valuable suggestions.\", \"we_have_corrected_the_errors_in_the_equations_and_provided_clarifications_to_address_any_areas_of_potential_confusion\": \"* a: We have made clearer annotations in equation (9), replacing $s$ with $s\\\\_t$ and $s^{\\\\prime}$ with $s\\\\_{t+1}$, and we have labeled $e = f^{-1}(a\\\\_t; \\\\theta)$.\\n\\n* b: In equation (10), we have updated the expectation notation to $\\\\underset{\\\\substack{(s\\\\_t, a\\\\_t, s\\\\_{t+1}) \\\\sim \\\\pi\\\\\\\\ s\\\\_{t+1} \\\\sim P(\\\\cdot \\\\mid s\\\\_t, a\\\\_t)}}{\\\\mathbb{E}}$ for consistency and to avoid confusion.\\n\\n* c: In equation (12), we explicitly define the relationship between $a$ and $e$ as $a = f(e; \\\\theta)$.\\n\\n* d: In equation (11), to ensure consistency, we have changed the sum symbol to the expectation symbol.\\n\\n**Q3: Include a proper discussion and illustration of the next state entropy.**\\n\\n**A for Q3:**\\nThank you for your suggestion. We have added a \\\"Discussion\\\" section (line 517) in the revised paper to elaborate on why maximizing next-state entropy is important in reinforcement learning. 
The content is as follows:\\n\\nWhy Maximize Next-State Entropy in Reinforcement Learning?\\nEntropy regularization is a fundamental technique in reinforcement learning.\\nBy integrating an entropy maximization term,\\nit enhances robustness to model and estimation errors [3], \\npromotes the acquisition of diverse behaviors [4], \\nfacilitates broader exploration [5,6,7]\\nand accelerates the learning process by smoothing the optimization landscape [8].\\n\\nHowever, maximizing policy entropy may not directly promote policy optimization due to redundancy in the action space. In such cases, next-state entropy extends the concept of policy entropy more directly. Specifically, next-state entropy measures the entropy of the next state resulting from the policy, rather than the action itself. This shift allows next-state entropy to capture the diversity of effects induced by actions. By bridging the gap between next-state and policy entropy, our method retains the benefits of policy entropy while addressing inefficiencies caused by action redundancy.\\n\\n**Q4: Claim the approach is mode-free.**\\n\\n**A for Q4:**\\nThanks for your suggestions and we have removed this point in our response.\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your efforts! I\\u2019ve raised my score accordingly.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer ZUw7\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your valuable comments.\\nWe hope the following statement can address your concern. \\n\\n**W2: Discussion with prior works focused on maximizing the entropy of all visited states**\\n\\n**A for W2:** \\nWe will add discussions of these works in the related works section and explicitly clarify the differences in objectives and methodologies between these studies and our approach. 
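The action-redundancy effect described above (distinct actions producing identical effects, so that policy entropy overstates the diversity of outcomes) can be illustrated with a small Monte Carlo sketch. The clipping nonlinearity, bin width, and sample count below are illustrative assumptions for this sketch, not the paper's setup:

```python
import math
import random
from collections import Counter

random.seed(0)

def binned_entropy(samples, bin_width=0.1):
    """Plug-in entropy estimate (in nats) of samples discretized into bins."""
    counts = Counter(round(x / bin_width) for x in samples)
    n = float(len(samples))
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Gaussian policy samples pushed through a saturating actuator: every action
# outside [-0.5, 0.5] collapses onto a boundary command, so many distinct
# actions produce the same effect on the system.
actions = [random.gauss(0.0, 1.0) for _ in range(100000)]
effective = [max(-0.5, min(0.5, a)) for a in actions]

h_policy = binned_entropy(actions)       # entropy of the sampled actions
h_effective = binned_entropy(effective)  # entropy of the effective commands
```

Under this sketch, roughly 60% of the probability mass collapses onto the two boundary commands, so the entropy of the effective action (and hence of the next state in a deterministic system) falls well below the policy entropy, which is the gap that motivates optimizing next-state entropy instead.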
\\n\\nHere are the **key differences** between our method (maximizing next-state entropy) and the state entropy maximizing methods mentioned:\\n\\nThe works mentioned in [1, 2, 3, 4, 5] directly construct policies (which can even be deterministic) to achieve better state coverage.\\nHowever, they do not consider the entropy of policies. \\n\\nIn contrast, our approach builds on the entropy of stochastic policies, which not only accelerates learning and prevents premature convergence to suboptimal solutions but also induces a smoother objective that connects solutions and enables the use of larger learning rates [6].\\n\\nNext-state entropy is an extension of policy entropy in environments with redundant action spaces.\\nPolicy entropy encourages action diversity while next-state entropy extends this idea to measure the diversity of effects caused by actions. \\nOur method inherits the benefits of policy entropy while overcoming inefficiencies caused by action redundancy.\\n\\n**W3: Errors in some equations and notations:**\\n\\n**A for W3:**\", \"we_have_corrected_the_errors_in_the_equations_and_notations_and_provided_clarifications_to_address_any_areas_of_potential_confusion\": \"* a: \\nThanks for pointing this out; we have modified the notation to ensure consistency\\nthroughout the paper.\\n\\n* b: $P^{\\\\pi}(s' \\\\mid s)$ represents the probability of transitioning to the next state $s'$ given the current state $s$ under the policy $\\\\pi$.\\nThe relationship between these two quantities can be expressed by marginalizing over actions. Given a policy $\\\\pi$, the relationship between the state transition probability $P^{\\\\pi}(s' \\\\mid s)$ and the action-selection probability $\\\\pi(a \\\\mid s)$ is as follows:\\n\\n$$\\nP^{\\\\pi}(s' \\\\mid s) = \\\\int_{a\\\\in \\\\mathcal{A}} \\\\pi(a \\\\mid s) P(s' \\\\mid s, a) \\\\, da\\n$$\\n\\n* c: To avoid confusion with the constant $\\\\pi$, we use $\\\\pi\\\\_{policy}$ to represent the policy. 
The policy follows a normal distribution, and we modify the expression as follows:\\n\\n$$\\n\\\\pi\\\\_{policy}(a) = \\\\frac{1}{\\\\sqrt{2 \\\\pi} \\\\sigma} \\\\exp \\\\left( -\\\\frac{(a - \\\\mu)^2}{2 \\\\sigma^2} \\\\right), \\\\quad \\\\mu < 0.\\n$$\\n\\n* d: We have added a definition of inverse dynamics of inner policy in the paper. The inverse dynamics model $p_{\\\\text{inv}}^{\\\\phi}(e\\\\mid s, s')$is a function that predicts the inner action $e$ required to transition from a current state $s$ to a next state $s^{\\\\prime}$. \\n\\n\\n* e,f,g: We acknowledge the missing expectation notation over the state $s$ in equations and have revised the paper to include it. \\nAll instances have been uniformly corrected in the revised version to address these issues. \\n\\n* h: Yes, the $x$-variable in the equation corresponds to the inner action $e$ . We map the inner action $e $ to the action $a$ using the piecewise linear function $f$. \\n\\n* i,j: We have standardized the notation throughout the paper to avoid any confusion.\\n\\n**W4: Discussion on the gap between action entropy and the next-state entropy is unused afterwards.**\\n\\n**A for W4:** As you pointed out, our discussion on the gap between action entropy and next-state entropy theoretically derives the condition under which non-redundant policies (and action spaces) would make them strictly equivalent. However, this condition is very strict and difficult to satisfy in cases of stochastic transitions. \\n\\nThis discussion motivates us to propose our approach in Section 5.\\nInstead of pursuing strict equivalence, we quantify the gap between action entropy and next-state entropy using a parameterized action mapping layer. We then use optimization to gradually minimize this gap, providing a more practical solution.\"}", "{\"comment\": \"Thank you for the authors' response! 
Considering the potential challenges in providing a rigorous mathematical analysis for Q2, the reviewer suggests revising the corresponding section to incorporate a heuristic assumption as the basis for the subsequent analysis\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback on our paper. With only two days remaining in the discussion period, we kindly ask that you review our responses to ensure we have fully addressed your concerns. If you find our responses satisfactory, we would greatly appreciate it if you could reconsider your rating/scoring.\\n\\nYour engagement and constructive input have been invaluable, and we truly appreciate your time and effort in supporting this process.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ZUw7 (2)\", \"comment\": \"**W5: The entropy of the next state is never shown.**\\n\\n**A for W5:** We use a toy example similar to Section 4 where we calculate the next-state entropy for both MNSE and SAC methods. The table below shows the next-state entropy across different steps (from 10k to 150k).\\n\\nFrom the table, it is clear that both MNSE and SAC show an initial rise in next-state entropy (NSE), which promotes exploration. After the model converges to an optimal solution, the NSE decreases, reinforcing exploitation. 
Our method (MNSE) quickly reaches a high next-state entropy, demonstrating stronger exploration capabilities than SAC, and is able to converge more quickly to the optimal solution.\\n\\n\\n| Step | MNSE | SAC |\\n|--------|--------------------|--------------------|\\n| 10k | 0.5258 | 0.5258 |\\n| 20k | 1.9093 | 1.0361 |\\n| 30k | 2.4005 | 1.2421 |\\n| 40k | 2.5332 | 1.4249 |\\n| 50k | 2.3982 | 1.5890 |\\n| 60k | 1.3882 | 1.6652 |\\n| 70k | 0.7042 (converge) | 1.6747 |\\n| 80k | 0.0326 | 1.4930 |\\n| 90k | 0.1232 | 1.4582 |\\n| 100k | 0.1492 | 1.3715 |\\n| 110k | 0.0722 | 1.0625 |\\n| 120k | 0.1111 | 0.7042 (converge) |\\n| 130k | 0.0072 | 0.0722 |\\n| 140k | 0.0078 | 0.1111 |\\n| 150k | 0.0525 | 0.0490 |\\n\\nTable 1. Next-state entropy of MNSE and SAC.\\n\\n\\n**Q1.1: Why not simply use the log likelihood of the inverse dynamics model as intrinsic reward?**\\n\\n**A for Q1.1:** The MinRed method [7] uses the log likelihood of the inverse dynamics model as intrinsic reward, and we use it as a baseline in our paper. \\nHowever, we demonstrate that our approach outperforms MinRed, as shown in Fig. 3. \\nThe key advantage of our method lies in introducing an action mapping layer after the inner action, which directly shapes the action space. \\nIn contrast to MinRed, which uses additional intrinsic reward to train the agent, our forward approach is more direct and effective.\\n\\n**Q1.2: Why not simply learn a forward model of the MDP, and use the log likelihood of that model as intrinsic reward?**\\n\\n**A for Q1.2:** In high-dimensional state spaces, directly learning a forward model of the MDP is extremely challenging. 
In comparison to the state space, the action space is relatively smaller, making the training and application of inverse dynamics models more common and feasible, as seen in prior works [8, 9].\\n\\n\\n**Q2 \\\\& Q3: What is the advantage of using the piecewise linear function in our method rather than a normalizing flow as action transformation and What is the advantage for using the discretized multinomial distribution rather than a normalizing flow (or auto-encoder + ELBO for learning) as the inverse dynamics model?**\\n\\n**A for Q2 \\\\& Q3:**\\nIn our theoretical derivation, any invertible mapping can serve as the action mapping function.\\nOn the other hand, the inverse dynamics model can be estimated using various approaches.\\nIn our experiments, the piecewise linear function and the discretized multinomial distribution have demonstrated performance surpassing the baseline.\\nWe appreciate the reviewer\\u2019s suggestion and plan to explore the integration of normalizing flows and auto-encoder + ELBO in future work to tackle more complex problems.\\n\\n[1] Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., & Salakhutdinov, R. (2019). Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274.\\n\\n[2] Guo, Z. D., Azar, M. G., Saade, A., Thakoor, S., Piot, B., Pires, B. A., ... & Munos, R. (2021). Geometric entropic exploration. arXiv preprint arXiv:2101.02055.\\n\\n[3] Islam, R., Ahmed, Z., & Precup, D. (2019). Marginalized state distribution entropy regularization in policy optimization. arXiv preprint arXiv:1912.05128.\\n\\n[4] Hazan, E., Kakade, S., Singh, K., & Van Soest, A. (2019, May). Provably efficient maximum entropy exploration. In International Conference on Machine Learning (pp. 2681-2691). PMLR.\\n\\n[5] Liu, H., & Abbeel, P. (2021). Behavior from the void: Unsupervised active pre-training. 
Advances in Neural Information Processing Systems, 34, 18459-18473.\\n\\n[6] Understanding the impact of entropy on policy optimization[C]//International conference on machine learning. PMLR, 2019: 151-160.\\n\\n[7] Action redundancy in reinforcement learning[C]//Uncertainty in Artificial Intelligence. PMLR, 2021: 376-385.\\n\\n[8] Estimating q (s, s\\u2019) with deep deterministic dynamics gradients[C]//International Conference on Machine Learning. PMLR, 2020: 2825-2835.\\n\\n[9] Learning action representations for reinforcement learning[C]//International conference on machine learning. PMLR, 2019: 941-950.\"}", "{\"title\": \"Response to Reviewer ZUw7 (2/2)\", \"comment\": \"[1] Liu, H., & Abbeel, P. (2021). Behavior from the void: Unsupervised active pre-training. Advances in Neural Information Processing Systems, 34, 18459-18473.\\n\\n[2] Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., & Salakhutdinov, R. (2019). Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274.\\n\\n\\n[3] Ziebart B D. Modeling purposeful adaptive behavior with the principle of maximum causal entropy[M]. Carnegie Mellon University, 2010.\\n\\n[4] Haarnoja T, Tang H, Abbeel P, et al. Reinforcement learning with deep energy-based policies[C]//International conference on machine learning. PMLR, 2017: 1352-1361.\\n\\n[5] Fox R, Pakman A, Tishby N. Taming the noise in reinforcement learning via soft updates[J]. arXiv preprint arXiv:1512.08562, 2015.\\n\\n[6] Haarnoja T, Zhou A, Abbeel P, et al. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor[C]//International conference on machine learning. PMLR, 2018: 1861-1870.\\n\\n[7] Haarnoja T, Zhou A, Hartikainen K, et al. Soft actor-critic algorithms and applications[J]. arXiv preprint arXiv:1812.05905, 2018.\\n\\n[8] Ahmed Z, Le Roux N, Norouzi M, et al. Understanding the impact of entropy on policy optimization[C]//International conference on machine learning. 
PMLR, 2019: 151-160.\\n\\n[9] Laskin M, Yarats D, Liu H, et al. Urlb: Unsupervised reinforcement learning benchmark[J]. arXiv preprint arXiv:2110.15191, 2021.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback on our paper. With only two days remaining in the discussion period, we kindly ask that you review our responses to ensure we have fully addressed your concerns. If you find our responses satisfactory, we would greatly appreciate it if you could reconsider your rating/scoring.\\n\\nYour engagement and constructive input have been invaluable, and we truly appreciate your time and effort in supporting this process.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Additional remarks\", \"comment\": \"Dear authors, thank you for responding and updating the manuscript. I still have several remarks and questions that I would appreciate if you clarified.\\n\\n1. Thank you for including papers about exploration with the marginal state entropy. I believe that you still misinterpret some of this literature. First, while most papers indeed focus on pretraining, state exploration can also be applied on-line when learning. Second, I am not convinced about the argument used in your response to justify that your approach is much different in philosophy. I still think that an algorithm pursuing this exploration objective would strengthen your experimental setting.\\n2. There are still some equations that may contain mistakes:\\n\\na. In equation (9), I think that $s$, $s'$ and $e$ are undefined.\\n\\nb. In equation (10), $s_t$ is undefined and should probably be taken under the expectation.\\n\\nc. In equation (12), the relation between $a$ and $e$ is undefined.\\n\\nd. I am a bit confused why equation (11) is written as a sum and the others are not.\\n\\n3. The article would benefit from including a proper discussion and illustration of the next state entropy.\\n4. 
It is unfair to claim (in your response) that your approach is model-free when you learn the inverse dynamics.\"}", "{\"title\": \"Looking forward to further comments!\", \"comment\": \"Dear Reviewer,\\n\\nWe have added additional explanations and experiments for our methods. We are wondering if our response and revision have cleared your concerns. We would appreciate it if you could kindly let us know whether you have any other questions. We are looking forward to comments that can further improve our current manuscript. Thanks!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer pNrB\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your valuable comments.\\nWe hope the following statement can address your concern.\\nWe have added supplementary experimental results as part of our response.\\n\\n**W1: Analyzing performance across a broader range of EAP settings.**\\n\\n**A for W1:** \\nWe conducted additional experiments to analyze the performance at EAP values of 20\\\\%, 40\\\\%, 60\\\\%, 80\\\\%, and 100\\\\%. 
This analysis provides a more comprehensive view of the algorithm's robustness and adaptability to different entropy thresholds.\", \"the_results_is_shown_as_follows\": \"| **Environment / EAP** | **MNSE (ours)** | **SAC** | **NovelD** | **MinRed** |\\n|------------------------|-------------------------|-----------------------|-------------------------|------------------------|\\n| **Ant** | | | | |\\n| 100\\\\% | 3318.60 \\u00b1 940.35 | 3354.58 \\u00b1 556.52 | **4517.24 \\u00b1 941.31** | 3076.22 \\u00b1 837.44 |\\n| 80\\\\% | 3590.74 \\u00b1 1169.71 | 3477.07 \\u00b1 979.73 | **3652.62 \\u00b1 307.70** | 3053.01 \\u00b1 852.90 |\\n| 60\\\\% | **3499.42 \\u00b1 842.01** | 3328.57 \\u00b1 544.72 | 3129.77 \\u00b1 258.92 | 2639.14 \\u00b1 199.72 |\\n| 40\\\\% | **3586.25 \\u00b1 681.38** | 1526.37 \\u00b1 1162.03 | 2600.26 \\u00b1 395.85 | 1761.25 \\u00b1 229.65 |\\n| 20\\\\% | **2820.91 \\u00b1 518.60** | -7.60 \\u00b1 1.88 | 1219.92 \\u00b1 45.86 | 920.87 \\u00b1 476.22 |\\n| **HalfCheetah** | | | | |\\n| 100\\\\% | **10162.46 \\u00b1 364.30** | 9535.45 \\u00b1 100.47 | 9860.49 \\u00b1 255.37 | 8626.04 \\u00b1 963.10 |\\n| 80\\\\% | **8482.84 \\u00b1 775.07** | 8019.85 \\u00b1 614.34 | 7110.31 \\u00b1 412.67 | 8451.85 \\u00b1 877.56 |\\n| 60\\\\% | **8594.27 \\u00b1 1142.66** | 8540.62 \\u00b1 891.91 | 6679.57 \\u00b1 818.11 | 6997.12 \\u00b1 1844.33 |\\n| 40\\\\% | **7824.08 \\u00b1 1190.98** | 4750.61 \\u00b1 2762.26 | 4082.68 \\u00b1 1668.43 | 5340.42 \\u00b1 2207.86 |\\n| 20\\\\% | **8389.90 \\u00b1 999.66** | 5340.42 \\u00b1 2207.86 | 4291.25 \\u00b1 771.95 | 4442.10 \\u00b1 1719.52 |\\n| **Hopper** | | | | |\\n| 100\\\\% | 3265.50 \\u00b1 165.32 | 3173.27 \\u00b1 207.26 | **3834.36 \\u00b1 346.68** | 2917.46 \\u00b1 426.55 |\\n| 80\\\\% | 3254.78 \\u00b1 172.49 | 2750.05 \\u00b1 224.12 | **3858.44 \\u00b1 436.45** | 3064.54 \\u00b1 794.75 |\\n| 60\\\\% | **3120.39 \\u00b1 158.73** | 2417.63 \\u00b1 457.04 | 2809.51 \\u00b1 265.54 | 2753.56 \\u00b1 636.98 
|\\n| 40\\\\% | **3082.22 \\u00b1 180.65** | 2394.08 \\u00b1 160.21 | 2648.36 \\u00b1 199.94 | 2763.01 \\u00b1 655.70 |\\n| 20\\\\% | **3013.21 \\u00b1 149.09** | 2421.67 \\u00b1 160.92 | 2439.52 \\u00b1 665.22 | 2743.68 \\u00b1 434.33 |\\n\\nTable 1. Performance across a broader range of EAP settings.\\n\\n\\n**W2: Further experiments focused on pure exploration tasks\\u2014such as in maze environments would be valuable.**\\n\\n**A for W2:** \\nWe conducted additional experiments with EAP=20\\\\% on pure exploration tasks in maze environments, including maze2d-umaze-v0, maze2d-medium-v0, and maze2d-large-v0.\\nThe results demonstrate that our method outperforms the baseline across these environments.\\n\\n| environment | MNSE | SAC | NovelD | MinRed |\\n|--------|--------------------|--------------------|--------------------|--------------------|\\n| maze2d-umaze-v0 | **41.00\\u00b11.14** | 35.94\\u00b11.34 | 37.32\\u00b11.12 |32.66\\u00b12.54 |\\n| maze2d-medium-v0 | **35.72\\u00b11.84** | 26.24\\u00b11.00 | 32.60\\u00b10.93 |18.14\\u00b11.33 |\\n| maze2d-large-v0 | **66.92\\u00b18.39** | 52.69\\u00b12.89 | 59.77\\u00b13.28 |53.82\\u00b16.87 |\\n\\nTable 2. Additional experimental results in maze environments.\\n\\n\\n**W3: Comparisons with more recent approaches (from 2022 or 2023) could further strengthen the relevance and appeal of this work.**\\n\\n**A for W3:** \\nYes, we have added E3B (2022) [1] as a comparison method in our experiments. 
The results of these comparisons are provided below.\\n\\n|Method| Ant (EAP=0.4) | HalfCheetah (EAP=0.4) | Hopper (EAP=0.2) | Walker2d (EAP=0.2) | Humanoid (EAP=0.2) |\\n|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|\\n| E3B | 2850.22\\u00b1363.84 | 3862.70\\u00b1894.38 | 2708.26\\u00b1683.06 | 3465.53\\u00b179.49 | 3174.87\\u00b1233.20 |\\n| MNSE(Ours) | **3586.25\\u00b1681.38** | **7824.08\\u00b11190.98** | **3013.21\\u00b1149.09** | **4240.30\\u00b1484.45** | **5431.75\\u00b1460.25** |\\n\\nTable 3. Comparisons with E3B(2022) in MuJoCo.\"}", "{\"summary\": \"This paper theoretically highlights the distinction between policy entropy and next-state entropy in Markov Decision Processes (MDPs) and makes a compelling argument that the two entropies are equivalent if the policy is non-redundant---meaning that different actions lead to different next states given the same state in the MDP. The paper then shifts its focus to demonstrating the advantages of incorporating maximum next-state entropy into the reinforcement learning process in MDPs. This is done by deliberately introducing saturation and deadzone effects into the control system to create redundant policies. Numerous experiments demonstrate their method can outperform the baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and clear, with a solid theoretical analysis and a sufficient set of experiments.\", \"weaknesses\": \"1. In the introduction, the authors present equipment aging as an example of redundancy in the action space. However, this example does not fully convince the reviewer. Specifically, reinforcement learning assumes that the Markov Decision Process (MDP) remains consistent. 
When changes occur in the action space, such as those caused by aging equipment, the previously learned policy may no longer perform effectively within the altered MDP. Further clarification or a more suitable example might strengthen this argument.\\n\\n2. There is an abuse of notation for reward function r(s,a) while the paper assumes the reward is only affected by states.\", \"questions\": \"1. Based on the theory, when effective action proportion is 100%, the MNSE should have the same performance as SAC (the base model adopted by MNSE). But in Fig 4. there are some differences, any explanation or analysis on this observation?\\n\\n2. For eq (4), how can we guarantee the mu variable is positive related with the entropy? The reviewer did not see any further analysis on this part (not even in the appendix).\\n\\n3. All experiments are conducted in the continuous action space, will maximum next-state entropy benefit the policy learning in discrete action space environments?\\n\\n4. Can you explain in more details on how to derive the content of equation (8)?\\n\\n5. To minimize the gap term in Theorem 5.1, step 1 and 2 are provided in equation (11) and (12). Why the parameters of the inverse dynamic of inner policy are optimized first, instead of the ones of the mapping layer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for raising the score to 6!\", \"comment\": \"We would like to thank the reviewer for raising the score to 6! We also appreciate the valuable comments, which helped us significantly improve the paper's strengths.\"}", "{\"title\": \"Looking forward to further comments!\", \"comment\": \"Dear Reviewer,\\n\\nWe have added additional explanations for our methods. We are wondering if our response and revision have cleared your concerns. We would appreciate it if you could kindly let us know whether you have any other questions. 
We are looking forward to comments that can further improve our current manuscript. Thanks!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer pNrB (2)\", \"comment\": \"**Q1 \\\\& Q2: Why choose SAC as backbone framework? Why DSPG, CBSQL, and max-min entropy framework were not included as baseline in the experiment section?**\\n\\n**A for Q1 \\\\& Q2:** \\nSAC is a successor to Soft Q-Learning (SQL) and integrates techniques such as double Q-learning and temperature adjusting (based on stable-baselines3). It is one of the most widely used maxEnt RL algorithms in the community. We believe that making improvements and developments based on the SAC algorithm is highly significant for the community.\\n\\nThe DSPG method (2019) introduced pioneering techniques like DoubleSampling into soft Q-learning. However, the SAC framework (based on stable-baselines3, 2021) has already integrated and refined techniques like double Q-learning.\\n\\nThe CBSQL method (2021) introduces pseudo-count density estimation to Soft Q-Learning, enabling more precise temperature adjustment. However, this method was only evaluated in discrete action spaces (Atari), and it does not show significant advantages in complex tasks in continuous action spaces.\\n\\nThe max-min entropy framework (2021) differs from traditional MaxEnt methods by constructing an exploration policy and separating exploration and exploitation. This added complexity makes it difficult to fairly compare this method with ours.\\n\\n\\n**Q3: The correlation between next-state entropy and the total entropy of visited states.**\\n\\n**A for Q3:** \\nHere are the **key differences** between our method (maximizing next-state entropy) and the state entropy maximizing methods mentioned:\\n\\nThe works mentioned in [2, 3, 4, 5, 6] directly construct policies (which can even be deterministic) to achieve better state coverage.\\nHowever, they do not consider the entropy of policies. 
\\n\\nIn contrast, our approach builds on the entropy of stochastic policies, which not only accelerates learning and prevents premature convergence to suboptimal solutions but also induces a smoother objective that connects solutions and enable the use of larger learning rates [7].\\n\\nNext-state entropy is an extension of policy entropy in environments with redundant action spaces.\\nPolicy entropy encourages action diversity while next-state entropy extends this idea to measure the diversity of effects caused by actions. \\nOur method inherits the benefits of policy entropy while overcoming inefficiencies caused by action redundancy.\\n\\n[1] Henaff M, Raileanu R, Jiang M, et al. Exploration via elliptical episodic bonuses[J]. Advances in Neural Information Processing Systems, 2022, 35: 37631-37646.\\n\\n[2] Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., & Salakhutdinov, R. (2019). Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274.\\n\\n[3] Guo, Z. D., Azar, M. G., Saade, A., Thakoor, S., Piot, B., Pires, B. A., ... & Munos, R. (2021). Geometric entropic exploration. arXiv preprint arXiv:2101.02055.\\n\\n[4] Islam, R., Ahmed, Z., & Precup, D. (2019). Marginalized state distribution entropy regularization in policy optimization. arXiv preprint arXiv:1912.05128.\\n\\n[5] Hazan, E., Kakade, S., Singh, K., & Van Soest, A. (2019, May). Provably efficient maximum entropy exploration. In International Conference on Machine Learning (pp. 2681-2691). PMLR.\\n\\n[6] Liu, H., & Abbeel, P. (2021). Behavior from the void: Unsupervised active pre-training. Advances in Neural Information Processing Systems, 34, 18459-18473.\\n\\n[7] Ahmed Z, Le Roux N, Norouzi M, et al. Understanding the impact of entropy on policy optimization[C]//International conference on machine learning. 
PMLR, 2019: 151-160.\"}", "{\"title\": \"Looking forward to further comments!\", \"comment\": \"Dear Reviewer,\\n\\nWe have added additional explanations and experiments for our methods. We are wondering if our response and revision have cleared your concerns. We would appreciate it if you could kindly let us know whether you have any other questions. We are looking forward to comments that can further improve our current manuscript. Thanks!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback on our paper. With only two days remaining in the discussion period, we kindly ask that you review our responses to ensure we have fully addressed your concerns. If you find our responses satisfactory, we would greatly appreciate it if you could reconsider your rating/scoring.\\n\\nYour engagement and constructive input have been invaluable, and we truly appreciate your time and effort in supporting this process.\\n\\nBest regards,\\n\\nAuthors\"}" ] }
0FxnSZJPmh
Physics-Informed Deep Inverse Operator Networks for Solving PDE Inverse Problems
[ "Sung Woong Cho", "Hwijae Son" ]
Inverse problems involving partial differential equations (PDEs) can be seen as discovering a mapping from measurement data to unknown quantities, often framed within an operator learning approach. However, existing methods typically rely on large amounts of labeled training data, which is impractical for most real-world applications. Moreover, these supervised models may fail to capture the underlying physical principles accurately. To address these limitations, we propose a novel architecture called Physics-Informed Deep Inverse Operator Networks (PI-DIONs), which can learn the solution operator of PDE-based inverse problems without any labeled training data. We extend the stability estimates established in the inverse problem literature to the operator learning framework, thereby providing a robust theoretical foundation for our method. These estimates guarantee that the proposed model, trained on a finite sample and grid, generalizes effectively across the entire domain and function space. Extensive experiments are conducted to demonstrate that PI-DIONs can effectively and accurately learn the solution operators of the inverse problems without the need for labeled data.
[ "Inverse Problems", "Stability", "Operator Learning", "Physics-Informed Machine Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=0FxnSZJPmh
https://openreview.net/forum?id=0FxnSZJPmh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vCRBXUOUwC", "skNLjkWtom", "np2kgVruyo", "k9OFNSQRxa", "gPpl0pFuD7", "eGsI9cTax0", "dWNro0K5O6", "dSw7vXr6xw", "WeVQAFaBkV", "UcYOTycNoP", "QisZhg66cG", "QM9LTi7B9c", "PoAG2WcglH", "NFH6GtWZWe", "LV8Nxfz4pe", "KBdSC5E3Ey", "K1CJdSDxH1", "JgmMmfUmR2", "Iycs2ct37J", "DcWqkoxrp6", "CEg1cvslMi", "6vYuddPMMR", "4g3gXrkyCg", "3KzLiOm06o", "2vYQuyeRCh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732961474419, 1733195536392, 1731649099098, 1732164408506, 1732531796365, 1730714934287, 1732955560506, 1732277716469, 1731648118792, 1732610723635, 1732537605900, 1731649088358, 1732978490484, 1732268452386, 1732495194444, 1731648842479, 1732610670473, 1731648855750, 1737523745525, 1731649232832, 1730345855472, 1729685885260, 1732160321167, 1734726521839, 1732537576936 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_3Fuq" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_3Fuq" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_Y92e" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_knEo" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_3Fuq" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_knEo" ], [ "ICLR.cc/2025/Conference/Submission6116/Reviewer_Y92e" ], [ "ICLR.cc/2025/Conference/Submission6116/Area_Chair_f5h7" ], [ "ICLR.cc/2025/Conference/Submission6116/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to further response to reviewer 3Fuq regarding additional experiments\", \"comment\": \"Thank you for your feedback on additional experiments for irregular measurement input. Now, the proposed method is able to deal with irregular sensor input. Accordingly, I have upgraded my score to 5.\"}", "{\"title\": \"Appreciation to reviewers\", \"comment\": \"As we conclude this discussion, we would like to express our sincere gratitude to the reviewers for their time and effort in evaluating our paper. Your valuable feedback has been immensely helpful, guiding us to make significant improvements.\\n\\nThe key revisions made in response to the reviewers' feedback are as follows:\\n\\n1. We have **revised the loss function** to incorporate the weights $\\\\lambda$'s and conducted various experiments to illustrate effective methods for selecting appropriate $\\\\lambda$'s.\\n2. We have carried out additional experiments, including **sensitivity analysis** and **ablation studies** as recommended by the reviewers.\\n3. We proposed a simple yet effective variation of PI-DION, named **variable-input PI-DIONs**, to handle the irregular measurement case, as suggested by the reviewers. 
The experimental results demonstrate that variable-input PI-DIONs can effectively handle the irregular measurement data. \\n\\nWe believe that the revised manuscript offers a meaningful contribution to the ICLR community and sincerely hope it meets the high standards expected by the community.\"}", "{\"title\": \"Continued\", \"comment\": \"Response to Questions:\\n\\n1. As you mentioned, FNO and DeepONet are typically applied to forward problems. In our experiments, both models were trained in a fully supervised setting to learn the inverse mapping $u|_{\\\\Omega_m} \\\\rightarrow f$. PI-DIONs, on the other hand, were trained in both supervised and unsupervised settings for comparison. \\n\\n2. Thank you for highlighting the loss function with labeled training data. We have now clearly defined the loss function for supervised PI-DIONs at the beginning of Section 4 to address this concern. We appreciate your valuable feedback, which has helped improve the clarity and presentation of our work.\"}", "{\"title\": \"Thank you\", \"comment\": \"We sincerely appreciate your thoughtful consideration. Please feel free to reach out if you have any further questions or would like to discuss anything further.\"}", "{\"title\": \"Feedback from reviewer 3Fuq\", \"comment\": \"Thank you very much for your detailed responses. The responses have addressed my questions on loss functions, theoretical novelty and some training details. I still have the following concerns.\\n1. The current architecture is a variant of DeepONet and lacks the flexibility to deal with sensor data with varying number and locations. This issue is of crucial importance for inverse problems, since in practical scenarios it is unreasonable to fix the number and locations of sensors in advance. \\n2. For the comparison with PINNs, the training times of PINNs in table 4 are at least 2 hours. 
In my own experiences, for the Reaction Diffusion equation, PINN takes much shorter time to converge on a 3090 GPU for a single instance. How many epochs did you use? And how many samples out of 1,000 did you use to get the accuracy of PINNs? \\n3. Training 1,000 samples involves 2,000 loss terms, and you did not mention stochastic training using batches. Have you ever encountered convergence failure during training due to too many loss terms? You mentioned convergence issues in your future work, does it merely mean improving convergence rate?\\n4. In comparison with your method PI-DIONs, PINN is flexible on the number and locations of sensor data and does not need training samples. Some meta-learning approaches have been proposed to address the retraining issue, e.g. [1]. PI-DION needs a lot of training samples and the accuracy is lower than PINN (from table 1 and table 4, the unsupervised case of PI-DION is much less accurate than PINN). Moreover, it is hard for PI-DION to generalize to samples that are far different from training samples. Could you please give more advantages of your method over PINNs besides the inference speed?\\n\\n[1] Maryam Toloubidokhti, Yubo Ye, Ryan Missel, Xiajun Jiang, Nilesh Kumar, Ruby Shrestha, and Linwei Wang. Dats: Difficulty-aware task sampler for meta-learning physics-informed neural networks. In The Twelfth International Conference on Learning Representations, 2024\"}", "{\"summary\": \"The authors propose Physics-Informed Deep Inverse Operator Networks (PI-DIONs), a novel architecture for solving PDE inverse problems without requiring labeled data. Theoretically, the authors extend stability estimates from traditional inverse problem theory to the operator learning setting, and prove universal approximation theorems for PI-DIONs. 
Empirically, the authors validate their proposed approach through experiments on reaction-diffusion equations, Helmholtz equations, and Darcy flow, achieving SOTA performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper engages with an important problem in SciML, learning to solve inverse problems based on physics losses without additional training data.\", \"The theoretical results are quite interesting. The authors extend standard stability estimates for inverse problems to the operator learning setting. Promisingly, the theorems apply to the reaction-diffusion equation and the Helmholtz equation, two standard benchmarks in the literature.\", \"The proposed method is simple and presented clearly and generally.\", \"The empirical results are promising. On three standard benchmarks, the authors demonstrate SOTA performance of supervised learning and near-SOTA of unsupervised learning, compared to supervised DeepONet and FNO.\"], \"weaknesses\": [\"The main weakness of the paper is that the empirical results, although promising, are relatively limited and could benefit from some clarification:\", \"In Table 1, PI-DION in the supervised learning setting (with 1k training examples) is shown to outperform two different DeepONets and FNOs. However, it's a bit unclear from the paper why this is true, and additional clarification about this would be helpful. Is there a difference in the model architecture / training objective / optimizer between the DeepONets and PI-DION in the supervised setting?\", \"See questions for more.\"], \"questions\": [\"Questions about experimental results:\", \"What are the number of parameters of each of the models in Table 1?\", \"Could the authors provide a sensitivity analysis showing how performance changes as the relative weighting between physics and data losses is varied? 
This would provide valuable insight into the method's robustness.\", \"How does PI-DION compare to other methods for solving inverse problems, e.g. Neural Inverse Operators [1]?\", \"Any explanation about why the performance hit between supervised and unsupervised PI-DION is larger for Darcy Flow and Helmholtz equation than for reaction-diffusion?\", \"How limiting is the assumption that there exists stability estimates for the inverse problem?\", \"How well do the theoretical bounds from Theorems 2, 3 match the empirical results of Table 1 (reaction-diffusion and Helmholtz)?\", \"1. Neural Inverse Operators for Solving PDE Inverse Problems\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Further Discussion\", \"comment\": \"We hope this message finds you well. We are writing to thank you for your valuable feedback on our manuscript. Your insights have been incredibly helpful, and we deeply appreciate the time and effort you have dedicated to reviewing our work.\\n\\nAs the discussion period is ending soon, (December 2) we are awaiting your feedback on our revised manuscript. We would also like to know if the additional experiment on the variable input case helps address your concerns. If possible, we would greatly appreciate the opportunity to have a further discussion with you to address any remaining concerns. \\n\\nThank you once again for your guidance and support. we look forward to your response.\"}", "{\"title\": \"Thank you\", \"comment\": \"We sincerely appreciate your thoughtful consideration. Please feel free to reach out if you have any further questions or would like to discuss anything further.\"}", "{\"title\": \"General response to reviewers\", \"comment\": \"We sincerely thank the reviewers for their thoughtful and constructive feedback on our manuscript. 
The comments were both insightful and helpful, enabling us to make significant improvements. In our revisions, we have clarified the notation and explanations, as well as added further details, including the number of parameters, the relative weights, and the training and inference times, which are now provided in Appendices B and C. Furthermore, we have conducted additional experiments, detailed in Appendix C, to highlight and emphasize our contributions. For more specific updates, please refer to the individual responses.\\n\\nWe believe that most of the weaknesses and questions have been thoroughly addressed, and we look forward to further discussions.\"}", "{\"title\": \"Further response to reviewer 3Fuq regarding additional experiments\", \"comment\": \"We believe that the incorporation of variable measurement points into PI-DIONs is a significant and valuable extension as you mentioned. To address this, we conducted additional experiments adopting the architecture of variable-input deep operator networks [1]. These experiments were performed on a new dataset with irregularly distributed measurement points. The results showed a comparable relative error (about 3.83%) to the vanilla PI-DIONs, demonstrating that PI-DIONs can be effectively generalized to handle cases with irregular measurement points. We also think the error can be further reduced with careful fine-tuning.\\n\\nThe results and architecture are summarized in Appendix C.3. We\\u2019ve highlighted the changes in blue. \\n\\nWe are grateful for the valuable discussion, which has further improved the manuscript. Please feel free to reach out if you have any questions.\\n\\n\\n[1] Variable-input Deep Operator Networks. Michael Prasthofer, Tim De Ryck, Siddhartha Mishra. 2022.\"}", "{\"title\": \"Continued\", \"comment\": \"**Q. In comparison with your method PI-DIONs, PINN is flexible on the number and locations of sensor data and does not need training samples. 
Some meta-learning approaches have been proposed to address the retraining issue, e.g. [1]. PI-DION needs a lot of training samples and the accuracy is lower than PINN (from table 1 and table 4, the unsupervised case of PI-DION is much less accurate than PINN). Moreover, it is hard for PI-DION to generalize to samples that are far different from training samples. Could you please give more advantages of your method over PINNs besides the inference speed?**\\n\\nA. We are truly grateful for your valuable comment. As you kindly provided, there are some meta-learning approaches to overcome the retraining issue, however, [1] only focuses on the parametric PDE, i.e., multiple forward problems. As a response to your concern, we want to emphasize that our focus is on developing a physics-informed operator network for solving inverse problems that can be trained in a **fully unsupervised manner**, with **immediate inference** once trained, and **theoretical convergence guarantees**. This is the major differentiation of PI-DIONs from such PINN algorithms. \\n\\nAs a minor difference, the unknowns are functions in our case, while [1] considered the scalar variable $\\\\lambda$. Additionally, they also assumed a certain distribution for $\\\\lambda$, which may result in an inaccurate solution for an out-of-distribution sample. \\n\\n[1] Maryam Toloubidokhti, Yubo Ye, Ryan Missel, Xiajun Jiang, Nilesh Kumar, Ruby Shrestha, and Linwei Wang. Dats: Difficulty-aware task sampler for meta-learning physics-informed neural networks. In The Twelfth International Conference on Learning Representations, 2024\\n\\n\\nWe hope that your concerns have been properly addressed and look forward to further discussion. Thank you once again.\"}", "{\"title\": \"Response to reviewer 3Fuq\", \"comment\": \"Thank you for clearly summarizing our contributions and highlighting the strengths of our work.\", \"response_to_weaknesses\": \"1. 
Thank you for the helpful feedback regarding the term on the right-hand side in Line 242. To improve clarity, we have revised the explanation by bringing the term $\\\\Vert f-f^*\\\\Vert_{L^2(\\\\Omega_m)}$ to the forefront of inequality. Additionally, we have enhanced the logical flow by adding more details to better connect the inequalities in Lines 233-237 with the final inequality in Lines 242-244. We appreciate your effort in identifying the potential source of confusion.\\n\\n2. Thank you for your insightful comment regarding inputs with variable counts and locations. The limitation in handling such inputs arises from PI-DION's reliance on the fundamental DeepONet framework. However, recent advancements, such as Variable Input Deep Operator Network [1] or GraphDeepONet [2] address this issue through aggregation functions with softmax. Incorporating these methodologies into the branch network could effectively resolve the above limitation. We have included this promising direction in the discussion section. We sincerely appreciate your valuable feedback.\\n\\n3. Thank you for suggesting a comparison with PINNs. As highlighted in Section 2, this study was motivated by the limitation that physics-informed loss functions cannot be directly applied to inverse problems in operator learning methods like DeepONet and FNO. To address this, we developed the novel architecture presented in Figure 1.\\nWhile PINNs are effective for inverse problems, they require retraining from scratch for each new set of partial measurements, leading to substantial computational costs. In contrast, PI-DIONs enable real-time inference after a single training phase. As shown in Table 5 (Appendix C.1), PI-DIONs achieve significantly faster inference times than PINNs, with only a slight increase in relative error, which is negligible (see Table 4 in Appendix C.1).\\n\\n4. Thank you for suggesting ablation studies to further justify our model. 
Training details, including hardware specifications, are provided in Appendix B. In response to your feedback, we have now included training times in Table 5 (Appendix C.1) and the prediction errors for different sample sizes of 100, 500, 1000, and 2000 in Table 7 (Appendix C.2). As observed, approximately 1000 samples are required to achieve a relative error of about 1% for the target function, although smaller sample sizes may simplify optimization.\\n\\n5. Thank you for clarifying the theoretical contribution of our works. As you mentioned, we extended existing theoretical results from prior research. However, earlier works primarily focused on physics-informed neural networks (PINNs) for predicting the solution and target function from a single fixed partial measurement of the solution. In contrast, our work extends these results to an operator learning framework, providing stability estimates for predicting the solution and target function from any given partial measurement. \\nSpecifically, Theorems 1 and 2 imply that, with sufficient measurements and samples, the difference between the continuous loss function $\\\\widetilde{\\\\mathcal{L}}(=\\\\widetilde{\\\\mathcal{L_{physics}}}+\\\\widetilde{\\\\mathcal{L_{data}}})$ and the loss function $\\\\mathcal{L}$ in Section 2 is small. Building on this, Theorem 3 guarantees that PI-DION can accurately approximate both the solution and target function from any given partial measurement by minimizing $\\\\mathcal{L}$. Finally, Proposition 1 establishes that, for any $\\\\epsilon>0$, a PI-DION structure can be constructed to optimize $\\\\mathcal{L}$ below $\\\\epsilon$. \\nIn summary, minimizing the loss function $\\\\mathcal{L}$ in Section 2 allows PI-DION to efficiently and accurately predict both solution and target function. 
Since this approach can be applied to any equation with stability estimates for a single measurement, our work establishes the first general framework for physics-informed operator learning in inverse problems.\\n\\n6. Thank you for pointing out the possible confusion in the definition of $u$. We have included the definition of $u_l^{i}$ immediately after defining $\\\\mathcal{L}_{data}$ in line 155. Additionally, we have corrected the input of $f$ to be two-dimensional, specified by the points $(x, y)$. These revisions have improved the clarity of our paper, and we appreciate your feedback. \\n\\n\\n[1] Michael Prasthofer, Tim De Ryck, and Siddhartha Mishra. Variable-input deep operator networks. 2022.\\n\\n[2] Sung Woong Cho, Jae Yong Lee, and Hyung Ju Hwang. Learning time-dependent pde via graph neural networks and deep operator network for robust accuracy on irregular grids. 2024\"}", "{\"title\": \"Reply to reviewer 3Fuq\", \"comment\": \"We appreciate your timely response and updated evaluation. The discussion has been incredibly helpful.\"}", "{\"comment\": \"I appreciate the authors' response and additional experiments. I have raised my score.\"}", "{\"title\": \"Follow-up on our response to your comments\", \"comment\": \"We hope this message finds you well. We would like to kindly remind you that we have submitted our response to your review comments. As the discussion period will end in two days, we would greatly appreciate it if you could let us know whether our response has addressed your concerns.\\n\\nIf you have any additional feedback, we would be most grateful to receive it. Again, Thank you for your valuable comments and suggestions, which have been instrumental in improving our work.\"}", "{\"title\": \"Response to reviewer Y92e\", \"comment\": \"Thank you for clearly summarizing our contributions and highlighting the strengths of our work. We completely agree with the strengths you have outlined.\", \"response_to_weaknesses\": \"1. 
The key distinction of PI-DION lies in its loss function, $\\mathcal{L_{physics}}$. Directly imposing $\\mathcal{L_{physics}}$ on models like DeepONet and FNO is not feasible, as they do not parameterize the solution and target function as functions of $x$. To address this limitation, we developed a novel architecture that enables the incorporation of $\\mathcal{L_{physics}}$ into the training process. We believe this innovative architecture and loss function are critical factors contributing to the observed performance improvements.\", \"response_to_questions\": \"1. Thank you for pointing out the missing information. We have addressed this by adding Table 3 in Appendix B, which provides the number of trainable parameters for all models listed in Table 1.\\n\\n2. Thank you for your insightful feedback. In response to your suggestion, we performed experiments by varying the relative weights between the physics and data losses across seven configurations: $(\\\\lambda_1, \\\\lambda_2) = (100, 1), (10, 1), (1, 1), (1,0.1), (0.1, 1), (1, 10), (1, 100)$. As shown in Table 4 (Appendix C.2), the relative test error decreases as the weight assigned to the data loss increases. This observation aligns with our intuition that, during the early stages of training, a large $\\\\mathcal{L_{data}}$ will push $s_{\\\\zeta, \\\\theta}$ in the wrong direction, as $u_{\\\\eta, \\\\theta}$ differs from the true solution, leading to an inaccurate $\\\\mathcal{L_{physics}}$. We have included this discussion at the beginning of Section 4.\\n\\n3. Thank you for your valuable feedback regarding the comparison with other baselines, particularly the widely used NIO. While NIO is a promising approach, a direct comparison with our model is challenging because NIO relies on fully supervised learning for operator-to-function mapping. 
In contrast, PI-DION is specifically designed for function-to-function mapping and can be trained without direct supervision on the target function. To the best of our knowledge, no existing architecture aligns with this approach. Therefore, we compare our model with foundational models such as FNO and DeepONet. We appreciate your understanding of this context.\\n\\n4. Thank you for your critical question regarding the performance gap across problems. We believe the primary factor is the dimensionality of the problem. For the reaction-diffusion equation, the input function (partial measurement of the solution) is defined on a one-dimensional domain ($\\\\partial\\\\Omega$), and the target function is also defined on a one-dimensional domain ($\\\\\\\\{T\\\\\\\\}\\\\times\\\\Omega$). In contrast, the other two problems involve functions defined on two-dimensional domains.\\nAs stated in Theorem 1 and 2, higher dimensionality increases the number of samples required to approximate the continuous version of loss functions $\\\\widetilde{\\\\mathcal{L_{physics}}}$ and $\\\\widetilde{\\\\mathcal{L_{data}}}$ with their discrete versions $\\\\mathcal{L_{physics}}$ and $\\\\mathcal{L_{data}}$. This is due to the growth of constants $N_{\\\\mathcal{N}}, N_{\\\\mathcal{B}}$ and $N$ with dimensionality, given fixed bounds $R_{\\\\mathcal{N}}, R_{\\\\mathcal{B}}$ and $R$ in PI-DION.\\nAdditionally, empirical evidence indicates that solving elliptic PDEs is inherently more challenging than parabolic PDEs, which may contribute to the observed performance gap.\\n\\n5. Thank you for considering the detailed stability estimates. As mentioned in Remark 1, these estimates are valid for certain equations, such as the inverse source problem for the reaction-diffusion equation and the Helmholtz equation. However, for the permeability function in the Darcy flow equation, relevant stability estimates have not yet been established. 
Despite this, empirical evidence presented in Section 4 confirms that PI-DION can still be consistently applied to the Darcy flow problem with the loss function described in Lines 152 and 352. These results suggest that PI-DION shows promise for approximating various inverse operators, even in the absence of formal stability estimates.\"}", "{\"title\": \"Response to reviewers regarding additional experiments\", \"comment\": \"We sincerely appreciate your concerns regarding the variable measurement case. To address the issue, we conducted additional experiments adopting the architecture of variable-input deep operator networks [1]. The results showed a comparable relative error (about 3.83%) to the vanilla PI-DIONs, demonstrating that PI-DIONs can be effectively generalized to handle cases with irregular measurement points. We also think the error can be further reduced with careful fine-tuning.\\n\\nThe results and architecture are summarized in Appendix C.3. We\\u2019ve highlighted the changes in blue. \\n\\nWe are grateful for the valuable discussion, which has further improved the manuscript. Please feel free to reach out if you have any questions.\\n\\n\\n[1] Variable-input Deep Operator Networks. Michael Prasthofer, Tim De Ryck, Siddhartha Mishra. 2022.\"}", "{\"title\": \"Continued\", \"comment\": \"6. Thank you for the valuable feedback regarding the verification of the theoretical bounds presented in Theorems 2 and 3. These theorems provide a rigorous foundation for PI-DION, ensuring that the approximated solution and target function converge to the true solution and target as the loss function $\\\\mathcal{L}$ is computed over a sufficiently large number of samples. As demonstrated in Table 5 of Appendix C.2, the relative error decreases with an increasing sample size. 
Since the theorems hold with a certain probability, empirically verifying the convergence rate (i.e., the order of error on sample size) would require extensive experimentation, which represents an interesting direction for future research. We have included a comment on this in the discussion section.\\n\\n[1] Roberto Molinaro, Yunan Yang, Bj\\u00f6rn Engquist, and Siddhartha Mishra. Neural inverse operators for solving pde inverse problems. 2023.\"}", "{\"title\": \"Response to reviewer knEo\", \"comment\": \"Thank you for clearly summarizing our contributions and highlighting the strengths of our work.\", \"response_to_weaknesses\": \"1. To the best of our knowledge, PI-DIONs and PI-DIONs-v0 (in Appendix A) are the first physics-informed operator networks specifically designed for inverse problems. Due to structural limitations, existing architectures such as DeepONet and FNO cannot incorporate physics loss for inverse problems. A detailed description of the limitations is presented in Chapter 2 (lines 105-108). The relatively simple network utilized in this work was chosen to highlight the novelty of the architecture and loss function while ensuring a fair comparison, which we encourage you to consider.\\nIn addition to the architectural novelty, we provide rigorous theoretical justification for our method, as detailed in Section 3. Specifically, we extend stability estimates to the operator learning framework, demonstrating that sufficiently small $\\\\mathcal{L}$ implies a small prediction error for both the solution and target functions. Furthermore, we present a universal approximation theorem for PI-DIONs, which guarantees that the loss function can be reduced to an arbitrarily small value. These theoretical contributions enhance the significance of our work, and to the best of our knowledge, these are the first theoretical results for inverse operator learning.\\n\\n2. 
Thank you for your valuable feedback regarding the comparison with the baseline. Solving forward and inverse problems using physics-informed methods for differential equations is a relatively recent area of research. While NIO is a promising approach, a direct comparison with our model is challenging because it relies on fully supervised learning of the operator-to-function mapping. In contrast, our model can be trained without direct supervision on the target function, and to the best of our knowledge, there is no existing architecture that aligns with this approach. Nevertheless, we can also enhance our methodology by incorporating DeepONet\\u2019s advanced model into PI-DIONs.\\nSince NIO utilizes DeepONet as a critical baseline, we selected the same model for comparison. If you could suggest relevant references addressing inverse problems with physics-informed methods, we would be glad to conduct further comparative experiments.\\n\\n3. The benchmark problems are widely used ones in physics-informed machine learning literature. In particular, even PINN approaches for solving the inverse source problem for the reaction-diffusion and the Helmholtz equations have been proposed very recently [2,3]. Our work extends recent works to the operator learning framework, where PI-DION rapidly predicts both the solution and target function from the partial measurements of the solution. Moreover, even in the absence of formal stability estimates, PI-DION consistently achieves accurate approximations for both solutions and target functions in the Darcy flow problem, suggesting its potential applicability to various differential equations. Furthermore, we have conducted additional experiments, including sensitivity analysis, detailed in Appendix C. Compared to PINNs, it offers significantly faster computation, while maintaining slightly larger but negligible relative errors.\\n\\n\\n[1] Roberto Molinaro, Yunan Yang, Bj\\u00f6rn Engquist, and Siddhartha Mishra. 
Neural inverse operators for solving pde inverse problems. 2023.\\n\\n[2] Hui Zhang and Jijun Liu. Solving an inverse source problem by deep neural network method with convergence and error analysis. 2023. Inverse Problems\\n\\n[3] Mengmeng Zhang, Qianxiao Li, and Jijun Liu. On stability and regularization for data-driven solution of parabolic inverse source problems. 2023. Journal of Computational Physics\"}", "{\"summary\": \"This paper proposes an architecture called Physics-Informed Deep Inverse Operator Networks (PI-DIONs), which can learn the solution operator of PDE-based inverse problems without labeled training data. The architecture of PI-DIONs is based on DeepONet, and trained with both the physics-infomred loss and data reconstruction loss. The stability estimates established in the inverse problem literature are extended to the operator learning framework. Experiments are conducted to demonstrate the effectiveness of PI-DIONs in learning the solution operators of the inverse problems without the need for labeled data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The integration of physics-informed losses into an inverse problem framework based on operator learning is novel, and in principle PI-DIONs can solve the inverse problems (at least in scenarios mentioned in experiments) fast and without the need for labeled data.\\n2. Theoretical analysis of the stability estimates is provided.\", \"weaknesses\": \"1. Line 243, \\\"where the term \\u2225f \\u2212 f^\\\\star\\u2225L2(\\u2126m) in the righthand side\\\", there is no such term there. Please clarify the equation in line 242 and include all terms on the right-hand side of the equation.\\n2. It seems that the input to the reconstruction and inverse branch networks is fixed in shape, corresponding to the partial measurement with given geometry. The observed data in PINNs can have variable count and locations. 
Please discuss how PI-DIONs might be adapted to handle variable measurement geometries and if there are any limitations on the types of measurement setups it can handle. \\n3. In the experiments, PI-DIONs are compared with purely data-driven DeepONet and FNO, which both did not take physics information into account. If possible, please include comparisons with PINNs in the experiments, since both your PI-DIONs and PINNs are physics-informed methods for inverse problems.\\n4. The simultaneous training of physics-informed losses for 1000 samples is a difficult task (similar to train 1000 PINNs simultaneously). I am curious about the training difficulties encountered. Please provide specific details on training time, hardware used, and any convergence challenges encountered. If possible, please also include an ablation study on the effect of sample size on PI-DIONs' performance since smaller sample size may lead to easier optimization.\\n5. The theoretical analysis on stability estimate is extended from existing key results that considered the single element case. \\n6. Please provide a clear definition of u in line 152 and describe its relationship with partial measurement. In line 456, it is better to write \\\"f(x,y) = 100x(1 \\u2212 x)y(1 \\u2212 y) \\\", so does line 450. \\n\\nConsidering the above weaknesses, I give a score of 3 to the current version of this paper.\", \"questions\": \"1. DeepONet and FNO are used for forward problems traditionally, how did they deal with inverse problems in your experiments?\\n2. How is the labeled training target f mentioned in line 399 used? The loss for target f is absent in line 152.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Physics-Informed Deep Inverse Operator Networks (PI-DIONs) for solving PDE-based inverse problems without the need for labeled data. 
The paper extends existing stability estimates from inverse problem literature to the operator learning framework, ensuring the robustness and generalizability of PI-DIONs across the entire function space and domain.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper provides a solid theoretical foundation for the proposed PI-DIONs.\\n2. The proposed method demonstrates practicality and efficiency in addressing PDE-based inverse problems without the need for labeled data.\", \"weaknesses\": \"1. The contribution lacks novelty. The architecture relies on relatively simple components, such as CNNs and MLPs for the branch and trunk networks. It doesn't introduce significant advancements beyond well-established methods.\\n2. The baselines used for comparison, such as DeepONet and FNO, are somewhat dated. The paper would benefit from comparisons with more recent and state-of-the-art methods to better demonstrate the model's competitiveness.\\n3. The experimental evaluation is limited in range. Conducting experiments on a broader range of benchmarks would strengthen the validation of the proposed method's effectiveness across diverse problems.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks to the authors for the detailed response and additional experimental results! I have raised my score accordingly.\"}", "{\"metareview\": \"The work proposes a new DeepONet based architecture for learning the solution operator of PDE-based inverse problems. 
Stability estimates are established and the architecture is evaluated on several benchmark problems, showing dominating performance.\", \"additional_comments_on_reviewer_discussion\": \"While I agree with some of the reviewers that the architectural design does not contain much novelty and that the inverse problems considered are quite simple, I think the formulation and stability estimates as well as the dominating performance of the method over baselines merit publication of this paper. The authors have carried out numerous new ablations and introduced a way of handling irregularly gridded data into their methodology. I'd suggest the authors consider adding a more challenging example, but, even without it, I think it establishes some very impressive numerical results.\"}", "{\"title\": \"Response to reviewer 3Fuq\", \"comment\": \"We sincerely appreciate your time and effort in thoroughly reviewing our rebuttal response. We are grateful for this second round of discussion and have made every effort to respond promptly, as the discussion period is nearing its deadline. For detailed explanations, please refer to each of our responses.\\n\\n**Q. The current architecture is a variant of DeepONet and lacks the flexibility to deal with sensor data with varying number and locations. This issue is of crucial importance for inverse problems, since in practical scenarios it is unreasonable to fix the number and locations of sensors in advance.**\\n\\nA. Thank you for your insightful comment regarding sensor data with varying numbers and locations. We agree that addressing this issue is crucial for extending the practical applicability of the proposed methodology. 
In the discussion section, we outline two variants of DeepONet that are designed to handle such variability, offering a potential direction for future work.\\nWhile this paper focuses on sensor data with fixed numbers and locations, PI-DIONs are the first models capable of rapidly approximating solutions and target functions for individual measurements. Once trained, PI-DIONs do not require retraining for newly provided measurements. Specifically tailored for function-to-function mapping, PI-DIONs can also be trained without direct supervision on the target function. To the best of our knowledge, no existing architecture aligns with this approach. To conclude, we believe that integrating ideas from recently proposed variants of DeepONets ([1,2]) could effectively address the concern you have raised. \\n\\n[1] Michael Prasthofer, Tim De Ryck, and Siddhartha Mishra. Variable-input deep operator networks. 2022.\\n\\n[2] Sung Woong Cho, Jae Yong Lee, and Hyung Ju Hwang. Learning time-dependent pde via graph neural networks and deep operator network for robust accuracy on irregular grids. 2024\\n\\n**Q. For the comparison with PINNs, the training times of PINNs in table 4 are as least 2 hours. In my own experiences, for the Reaction Diffusion equation, PINN takes much shorter time to converge on a 3090 GPU for a single instance. How many epochs did you use? And how many samples out of 1,000 did you use to get the accuracy of PINNs?**\\n\\nA. We sincerely appreciate the time and effort you invested in reviewing our manuscript. As you pointed out, there was an error in Table 5. This mistake arose from the selection of $\\\\lambda$, where the values of $\\\\lambda$ for PINN and PI-DION were different for the reaction-diffusion equation. Specifically, we initially reported the training time of PINN using $(\\\\lambda_1, \\\\lambda_2) = (1, 100)$ with 1e+7 epochs, which was intended for PI-DION. 
We have now corrected Table 5 to accurately reflect the results, including the number of epochs (1e+6). To evaluate the accuracy of PINNs, we selected a single sample from our dataset. For the reaction-diffusion equation, we just tried several random samples, and the results remained consistent. Once again, we appreciate your feedback in identifying this error.\\n\\n\\n**Q. Training 1,000 samples involves 2,000 loss terms, and you did not mention stochastic training using batches. Have you ever encountered convergence failure during training due to too many loss terms? You mentioned convergence issues in your future work, does it merely mean improving convergence rate?**\\n\\nA. We appreciate your insightful comment. Regarding stochastic training, we have added the following sentence in Appendix B:\\n\\n**All experiments were conducted on a single RTX 3090 GPU, with the batch size determined based on available memory. For the three experiments, we used either 1,000 or 500 samples per batch**\\n\\nWhile we are aware of the typical convergence failure scenarios in PINNs, such phenomena were not observed in our experiments. However, we did note that training was slow, despite the loss function steadily decreasing. This observation led us to highlight accelerating convergence as a potential area for future research. We have now revised the discussion section to make this point clearer.\"}" ] }
0Fi3u4RCyU
Evolve: Evaluating and Optimizing LLMs For Exploration
[ "Allen Nie", "Yi Su", "Bo Chang", "Jonathan Lee", "Ed H. Chi", "Quoc V Le", "Minmin Chen" ]
Despite their success in many domains, large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty. This is crucial as many real-world applications, ranging from personalized recommendations to healthcare interventions, demand that LLMs not only predict but also actively learn to make optimal decisions through exploration. In this work, we measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications. We develop a comprehensive suite of environments that include both context-free and contextual bandits of varying task difficulties to benchmark LLMs' performance. Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs: by providing explicit algorithmic guided support during inference; and through knowledge distillation via in-context demonstrations and fine-tuning, using synthetic data generated from these algorithms. Impressively, these techniques allow us to achieve superior exploration performance with smaller models, surpassing larger models on various tasks. We conducted an extensive ablation study to shed light on the different factors, such as task difficulty and data representations, that influence the efficiency of LLM exploration. Additionally, we provide empirical measurements on the convergence rate of different exploration strategies introduced.
[ "Large Language Model", "Exploration" ]
Reject
https://openreview.net/pdf?id=0Fi3u4RCyU
https://openreview.net/forum?id=0Fi3u4RCyU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z803JKj8bx", "rSeuBaWzl2", "nmf0nhkgTc", "mpwhcdhTEi", "lH94uBa6ef", "jwUxR9V5ST", "ePjm9zvdmi", "YetFzLIGJE", "UZ6pVkgRjr", "SohSfJEnfg", "M6UgYSexHb", "LmQOBLLYSv", "LYjHr0mNAb", "KK1Vkgzrcz", "HqYBB2N0Cz", "FjM8Tmsg44", "EWde8zsuPc", "Ca1SaDzPtP", "CMAmXmc5Ik", "BucS8DhII4", "Apenjznn3J", "A9onzbPY4W", "9DegtfYsbd", "93hE0f4VaV", "8r6jek3Wzs", "8fgyruFUMb", "8T7bWxZ2RH", "6tJpafJPP2", "2hTDOWl7AT", "09pCup8E11" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730164168827, 1732308391260, 1732003666543, 1732308323195, 1732466167159, 1737523986779, 1732003529098, 1732004601387, 1730194868976, 1733121400771, 1732377753315, 1732484128160, 1730541402479, 1732308442619, 1734912957529, 1733304186004, 1732003437024, 1730033716490, 1732004967030, 1732466182122, 1732465721412, 1732621506379, 1732367111544, 1732003702229, 1732010324966, 1732003786563, 1733119976304, 1733120298788, 1733220654783, 1732308374486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_GjQX" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9499/Reviewer_ZrAm" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_GjQX" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_VCNG" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_3Uq3" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Area_Chair_aPbS" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_VCNG" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_3Uq3" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_VCNG" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_ZrAm" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ], [ "ICLR.cc/2025/Conference/Submission9499/Reviewer_3Uq3" ], [ "ICLR.cc/2025/Conference/Submission9499/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This submission studies the problem of in-context exploration, where an LLM interacts with a bandit environment, and its history of observations and interactions with the environment are given in-context. The LLM agent then decides its next action based on this given context. Two forms of history are considered: raw history, in which the entire history is given in-context and summarized history, where summary statistics are pre-computed and given in-context instead.\\n\\nThe authors call their framework BanditBench. They consider both stochastic multi-armed bandit and contextual bandit instances. 
For multi-armed bandits, they consider two action descriptions: choosing between different videos and different clothes. They also consider two reward distributions: Gaussian and Bernoulli. For contextual bandits, they construct their instances from the MovieLens dataset. The MovieLens dataset contains 10,000 real users\\u2019 movie ratings. In the constructed contextual bandit instance, the goal is to recommend a personalized movie that the specific user seen at the current round will enjoy. The LLM is given textual features, as well as numerical features taken from a low-rank approximation of each user\\u2019s rating matrix as the context in each round. \\n\\nThe authors propose two mitigations to improve the exploratory behavior of LLMs in bandit tasks. Both methods leverage the behavior of optimal bandit algorithms. For the purposes of this submission, the optimal bandit algorithm considered is UCB for multi-armed bandits and LinUCB for contextual bandits. In inference-time algorithmic guided support (the authors\\u2019 first proposed mitigation), the LLM is given the explore/exploit components of UCB/LinUCB at each time step. (E.g. for UCB, this is the empirical average reward and the \\u2018exploration bonus\\u2019 for each arm.) For algorithmic distillation (the authors\\u2019 second proposed mitigation), UCB/LinUCB trajectories are given either in-context or via fine-tuning. \\n\\nThe authors empirically evaluate Gemma-2B, Gemma-9B, Gemini 1.5 Flash, and Gemini 1.5 Pro on 16 multi-armed bandit and 2 contextual bandit tasks. They compare the performance of different models via pairwise win rate. They find that, perhaps surprisingly, few-shot learning boosts Flash\\u2019s performance while hurting Pro\\u2019s. They also find that fine-tuning significantly improves performance over few-shot learning, and leveraging inference-time support significantly improves performance across all models. 
Various ablations are also performed.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"In-context reinforcement learning is an important and interesting problem, and multi-armed bandits & contextual bandits are an important building block in this direction. The authors propose several mitigations to improve the ability of LLMs to explore in these settings. Moreover, the paper is well-written and the multi-armed bandit experiments are comprehensive.\", \"weaknesses\": \"While the multi-armed bandit experiments are thorough, their novelty is somewhat limited as (as the authors point out), Krishnamurthy et al. 2024 study a very similar multi-armed bandit setting. While the multi-armed bandit results in this submission are more comprehensive, their findings are similar to Krishnamurthy et al.\\n\\nThe authors do include contextual bandit experiments (which are not present in Krishnamurthy et al.), but they are less comprehensive than the multi-armed bandit experiments. \\n\\nFinally, I am not fully convinced by the authors proposed mitigations. If we give LLMs things which make it easier for them to compute an upper-confidence bound, are we testing the LLMs\\u2019 ability to explore, or their ability to implement UCB? One reason why in-context exploration is interesting is because of the complex structure of real-world decision-making tasks. While it is natural to test LLMs\\u2019 exploration abilities on simple multi-armed bandit and contextual bandit tasks, we already have optimal algorithms for these domains and so deploying LLMs in such simple settings is not the end goal. 
Given that UCB is often suboptimal in structured bandit tasks beyond the two studied in this work, do you believe your proposed mitigations will extend to more complicated tasks?\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"As the author/reviewer discussion period getting close to an end (in 4 days -- Nov 26), we are wondering if 1) Our rebuttal addresses some of your concerns about the paper; 2) there is anything else we can do in the next 4 days to change your opinion/position on the current rating?\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you for your thoughtful review. We address your concerns point by point below:\\n\\n> In MAB, I would like to see a setting with variable sigma for each action, as the exploration problem for the LLMs might get easier when all of the actions share the same variance.\\n\\nThank you for your insightful question! In our current Gaussian domain environments from BanditBench, we vary task difficulty by adjusting the mean gaps between actions, following the approach outlined in [1]. This allows us to evaluate how different methods perform across both simple and complex domains, providing an initial understanding of their capabilities.\\n\\nWe completely agree that introducing variable variance across actions would add another layer of fine-grained difficulty, offering deeper insights into exploration behavior. However, due to the limited time during the rebuttal period, we were unable to complete evaluations for these tasks across all models (16 tasks, on top of 32 models = 512 evaluation runs). We plan to incorporate this environment into BanditBench and include it in the final version of the paper.\\n\\n[1]. Richard S Sutton. Reinforcement learning: An introduction. 
2018.\\n\\n\\n> Why is OFT in Figure 2 present only for Gemini 1.5 Flash?\\n\\nDue to resource constraints, we chose to test the OFT idea with a single model. Flash was selected as it strikes a balance between model capability and computational efficiency, making it a practical choice for evaluating this approach.\\n \\n> Any idea how the LLMs perform in larger action spaces? I can imagine that many real-world applications go well beyond K=30, and any discussion on these scaling laws would be very helpful.\\n\\nThat\\u2019s a great question! Here is a performance breakdown on MAB (K=5 to K=20).\\n\\nWe performed an additional analysis by computing the model average win-rate on domains with K=5 and K=20 in the MAB experiment.\\n\\nRH shows Raw History.\\n\\nAG shows model with algorithmic guide.\\n\\n| | Flash + RH | Flash + AG | OFT Flash | Pro + RH | Pro + AG | UCB |\\n|------|------------|------------|-----------|----------|----------|-------|\\n| K=5 | 33.6% | 26.6% | 64.1% | 48.0% | 67.6% | 87.1% |\\n| K=20 | 21.9% | 37.9% | 67.2% | 43.0% | 51.6% | 94.1% |\\n\\nLarger action spaces (e.g., K=20) present greater challenges for all models, and we observe a notable performance drop for LLMs that rely on raw history. However, the techniques proposed in our paper, such as inference-time Algorithmic Guidance (AG) and oracle behavior fine-tuning (OFT), show increasing importance in these settings.\\n\\nThis suggests that while LLMs may struggle with scaling in raw-history setups, the enhancements explored in this work are particularly valuable for handling larger and more complex action spaces, making them essential for real-world applications with larger K.\\n\\n> Based on Figure 5, Gemma models perform terribly in exploration, even with all the techniques introduced in the paper. Do you have any explanation/hypotheses on why this is the case? 
Is it because of the model sizes?\\n\\nOptimal decision-making is inherently a challenging task, and smaller models, such as the Gemma models, often struggle to generalize beyond the tasks they were specifically trained on. This limitation has been observed in prior works, such as GSM-1K [2] and Symbolic-Math [3], where smaller models exhibit significantly degraded performance when faced with even slight variations in task structure.\", \"the_poor_performance_of_gemma_might_come_from_the_following_factors\": \"(1). Limited capacity for complex reasoning: as smaller models lack the capacity to perform complicated reasoning required for optimal decision making, i.e., calculating the exploitation values, exploration bonus, and figuring out the best way to combine them; (2). Difficulty with generalization: it seems smaller models are poor at adapting to new tasks, leading to poor exploration behavior; (3). Long-context window: smaller models often struggle to effectively extract and utilize information from long contexts, and this basically prohibits the effective exploration.\\n\\n[2] Zhang, Hugh, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao et al. \\\"A careful examination of large language model performance on grade school arithmetic.\\\" arXiv preprint arXiv:2405.00332 (2024).\\n\\n[3] Mirzadeh, I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., & Farajtabar, M. (2024). Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models. arXiv preprint arXiv:2410.05229.\"}", "{\"comment\": \"Hi, thank you very much for your response! We really appreciate it.\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you so much for responding and engaging with us - we really appreciate it! We fully agree that evaluating and improving exploration capabilities with real-world decision-making tasks in mind is critical. 
We break down into the following points\\n\\n\\n**Bandit Tasks We Provide Already Captures Many Real-World Decision Making Scenarios**\\n\\nAs you pointed out, real-world decision making is complex, often involving unknown true value of each option (under different contexts). Bandits environments are exactly mathematical frameworks to study real-world decision making with uncertainty. Multi-arm bandits to study decision making with context-free and independent options, e.g., Duolingo app notification, UN job assistance program, which we included in the multi-armed bandits environments. Contextual bandits to study \\u201cstructured\\u201d decision making, e.g., news article recommendation, movie recommendation, which we included in the contextual bandit environment.\\n\\n| Decision-Problem | Bandit Abstraction | Reference |\\n|--------------------------------------------------------------------------------|---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\\n| Duolingo App Notification | Multi-Armed Bandit | [Yancey, Kevin P., and Burr Settles. \\\"A sleeping, recovering bandit algorithm for optimizing recurring notifications.\\\"](https://research.duolingo.com/papers/yancey.kdd20.pdf) KDD-2020 |\\n| The United Nation Job Assistance for Jordanian Refugees | Multi-Armed Bandit | Caria, A. Stefano, Grant Gordon, Maximilian Kasy, Simon Quinn, Soha Osman Shami, and Alexander Teytelboym. [\\\"An adaptive targeted field experiment: Job search assistance for refugees in Jordan.\\\"](https://www.cesifo.org/DocDL/cesifo1_wp8535.pdf) Journal of the European Economic Association 22, no. 2 (2024): 781-836. 
|\\n| The United States Santa Clara County Court Text message reminder to court date | Multi-Armed Bandit | Chohlas-Wood, Alex, Madison Coots, Joe Nudell, Julian Nyarko, Emma Brunskill, Todd Rogers, and Sharad Goel. [\\\"Automated reminders reduce incarceration for missed court dates: Evidence from a text message experiment.\\\"](https://arxiv.org/abs/2306.12389) arXiv preprint arXiv:2306.12389 (2023). |\\n| Yahoo News Article Recommendation | Contextual Bandit | Li, Lihong, Wei Chu, John Langford, and Robert E. Schapire. [\\\"A contextual-bandit approach to personalized news article recommendation.\\\"](https://arxiv.org/abs/1003.0146). WWW 2010. |\\n| Netflix Movie Recommendation | Contextual Bandit | Bibaut, Aur\\u00e9lien, Maria Dimakopoulou, Nathan Kallus, Antoine Chambaz, and Mark van Der Laan. [\\\"Post-contextual-bandit inference.\\\"](https://arxiv.org/abs/2106.00418) NeurIPS 2021. |\\n\\n**Bandit Algorithms Produce Suboptimal Trajectories When Real-World Reward Model is Non-Linear**\\n\\nWe studied UCB-inspired algorithms to supplement LLMs for decision-making in multi-arm bandits and LinUCB-inspired algorithms for contextual bandits. There are also more recent works on NeuralLinear [1] which combine the representation power of deep neural networks and linear bandits for even more complex tasks where the reward has a non-linear dependency on the context, but the uncertainty estimate is mostly encapsulated in linear bandits again. These algorithms have been successfully verified in real-world decision-making, such as solving industrial-scale video recommendations [2]. Our framework already handles such real-world cases, and essentially, our CB experiments aim to simulate tasks like move recommendations.\\n\\n[1] Riquelme, Carlos, George Tucker, and Jasper Snoek. 
\\\"Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for thompson sampling.\\\" arXiv preprint arXiv:1802.09127 (2018).\\n\\n[2] Su, Yi, Xiangyu Wang, Elaine Ya Le, Liang Liu, Yuening Li, Haokai Lu, Benjamin Lipshitz et al. \\\"Long-Term Value of Exploration: Measurements, Findings and Algorithms.\\\" In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pp. 636-644. 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response 2\", \"comment\": \"> Could you provide further analysis to help guide the selection of the most appropriate method in practical applications? Besides, could you clarify the numeric similarities observed in Figure 4?\\n\\nWe addressed the numerical similarities above - the identical values reflect the win-rate of the same model referenced in different contexts.\\n\\nAs for data selection, for in-context few-shot demonstration, easier, shorter, and simpler examples with clear-cut decisions tend to be the most effective as few-shot examples. We hypothesize that these straightforward cases are easier for the model to understand, reason through, and replicate in-context. Conversely, for OFT, selecting more challenging examples can help mitigate overfitting to simpler patterns and yield greater improvements by encouraging the model to generalize better to complex scenarios.\\n\\n> How well do the proposed methods generalize to domains with much larger action spaces, such as real-world recommendation systems that involve thousands of items or more complex decision-making problems where exploration becomes more challenging due to the increased task size and complexity?\\n\\nThank you for your thoughtful question! 
The challenge of exploration indeed scales with the size of the action space, and as the number of actions in a system grows significantly, the efficiency of any algorithm, including our proposed methods and classical ones like Linear UCB, is expected to decrease. However, hierarchical contextual bandit approaches, as explored in works such as [1], offer promising strategies to address this issue.\\n\\nFor example, hierarchical methods can leverage item embeddings or other clustering techniques (potentially using other large language models) to group items and construct a tree structure. Our algorithm can then be applied effectively at different levels of this hierarchy, improving scalability while maintaining performance. There are many open research questions to explore, such as how to construct the hierarchical tree effectively, how to determine the level at which the exploration algorithm is best applied, and whether an LLM-based agent could integrate all these aspects. We consider these to be exciting directions for future work.\\n\\nAdditionally, we conducted experiments to test our algorithm's generalization by training it on data collected from smaller action spaces and evaluating it on larger action spaces (i.e., easy-to-hard domain generalization). 
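For concreteness, the index rule behind algorithms like UCB (the multi-armed analogue of the Linear UCB mentioned above) can be sketched in a few lines. This is an illustrative textbook UCB1, not the BanditBench implementation, and the `bernoulli` reward function is a made-up example:

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Minimal UCB1: try each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_pulls). Returns per-arm pull counts."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: every arm is pulled once
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += pull(arm, rng)
    return counts

# Two Bernoulli arms with means 0.9 and 0.1; UCB1 concentrates on arm 0.
def bernoulli(arm, rng):
    return float(rng.random() < (0.9 if arm == 0 else 0.1))

pulls = ucb1(bernoulli, n_arms=2, horizon=500)
```

The forced initialization phase and the per-arm bonus, which shrinks only as an arm accumulates pulls, are one way to see why exploration cost grows with the number of arms K.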
This analysis provides valuable insights into how well our approach scales and adapts to more extensive domains, such as real-world recommendation systems or other complex decision-making problems.\\n\\nHere, we show the performance of a Flash model fine-tuned on 5 arms and then evaluated on 20 arms, compared with the baseline:\\n\\n| | Flash | Flash + Few Shot (on K=5 with SH) | Flash + OFT (on K=5 with RH) |\\n|------|-------|-----------------------------------|-------------------------------|\\n| K=20 | 21.9% | 41.8% | 46.1% |\\n\\nWe see that both few-shot examples from K=5 (a simpler domain) and optimal behavior fine-tuning generalize to the harder domain where K=20.\\n\\nWe hope we have addressed your concerns and questions thoroughly, and we are happy to answer more questions if they arise. Thank you very much!\\n\\n[1] Show Me the Whole World: Towards Entire Item Space Exploration for Interactive Personalized Recommendations\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you very much for your thoughtful review. We are adding your two suggested citations to the paper! Thanks for bringing them to our attention. Here we address some of your concerns:\\n\\n> Lack of novelty in some of the contributions\\nWe thank the reviewer for acknowledging the contribution from BanditBench. We would like to address your concerns about the novelty of the algorithmic-guided inference-time support and the algorithmic distillation approach in detail. Conceptually, there is a fundamental difference between \\u201cOptimal Behavior Fine-tuning\\u201d and Behavior Cloning, which we elaborate on in the first bullet point below:\\n- On Behavioral Cloning (BC) and Optimal Behavior Fine-Tuning (OFT): we fully agree that Optimal Behavior Fine-tuning shares some similarities with behavior cloning in training objectives. However, BC trains on a single policy's sampled trajectories, while OFT trains on trajectories of a policy that is self-improving. 
To put it more bluntly, BC learns to mimic behavior from a **single, fixed** policy. OFT trains on trajectories sampled from **multiple policies**, each policy updated by a learning algorithm (such as UCB update or policy gradient).\\n- Similarly, while \\u201cin-context few-shot demonstration\\u201d is similar to in-context behavioral cloning, there are many design choices and open questions that significantly impact its effectiveness. For instance, what type of few-shot examples should be included to ensure better generalization to new environments? Should they come from simple or challenging domains? What representations should be used? These decisions play a crucial role in shaping how the model understands, reasons, and generalizes in new test domains.\\n\\nIn Section 5.3.2, we provide a comprehensive study and evaluation of these factors. We believe this analysis is highly valuable for advancing our understanding of how to effectively perform in-context exploration in LLMs.\\n\\n> In particular, the technique that the paper calls \\\"Optimal Behavior Fine-Tuning\\\" seems to be exactly what is known in the literature as Behavioral Cloning.\\n\\n> Is \\\"Optimal Behavior Fine-Tuning\\\" what is known in the literature as Behavioral Cloning? If so, please change the name in your paper. It can be confusing to a reader.\\n\\nWe acknowledge that OFT and Behavioral Cloning (BC) share many similarities. However, there is a fundamental distinction between the two. OFT is designed for algorithm distillation, focusing on capturing a sequence of self-improvement behaviors and generalization across any new test domains. 
In contrast, BC aims to learn a policy by mimicking a static policy, with no iterative improvement between trajectories.\\n\\nAlthough both approaches rely on maximum-likelihood learning, their goals are different: OFT seeks to encode dynamic, iterative refinement processes, while BC focuses on replicating static behavior.\\n\\n> Can the applicability of BanditBench be extended to other decision-making scenarios beyond bandit settings? Can you add some discussion about it in the paper? \\n\\n> I feel like recently LLM agents in more complex domains such as MDPs are very relevant and may be very useful in many real-world applications.\\n\\nThank you for the thoughtful suggestion! Extending BanditBench to decision-making scenarios beyond bandit settings, such as MDPs, is a natural and exciting direction. However, this transition introduces additional complexities that warrant careful consideration:\\n\\n- **Choice of Optimal Algorithm**: In MDPs, only tabular setups have provably efficient exploration algorithms. It would be interesting to investigate whether incorporating in-context few-shot demonstrations from suboptimal algorithms could still provide performance gains over existing LLMs. This exploration could give us new insights into how LLMs can leverage sub-optimal strategies in more complex domains.\\n- **Interpretability of Exploration Behavior**: In bandit settings, self-improvement behaviors are relatively straightforward to define and analyze, with theoretical guarantees like worst-case upper bounds. These results allow us to derive functional forms, as discussed in Section 6, to interpret and measure an LLM's exploration behavior. In MDPs, this interpretability becomes more challenging.\\n\\nOur method can naturally be extended to MDPs by fine-tuning on any behaviors/data derived from the best algorithms in the literature. 
We are interested in exploring how this scales to more complex environments and whether it can provide meaningful improvements for in-context exploration. Additionally, we fully agree that rigorously understanding LLMs' exploration behavior in MDPs is both a critical and exciting direction for future research. We will include this discussion in the revised version of the paper. Thank you for highlighting this!\\n\\nWe hope our responses have clarified your concerns and addressed your questions. We are happy to answer more questions if they arise. Thank you very much!\"}", "{\"summary\": \"The authors develop the BanditBench benchmark, which evaluates LLMs' abilities to explore and converge to optimal actions through the multi-armed bandit framework. They comprehensively evaluate the suite of Gemma and Gemini 1.5 models and propose two techniques to boost the LLMs' exploration abilities further.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-structured and easy to read. It extends the idea of Krishnamurthy et al. (2024) to contextual bandits, which is an important step for many practical applications.\\n\\nThe LLM evaluation methodology is sound and uses the MovieLens dataset, which I find a good fit for LLM exploration. I especially like the functional interpretation in Section 6, which allows us to compare LLM exploration capabilities to the established bandit algorithms, which clearly shows the LLMs are (unsurprisingly) lagging behind. This gives the paper a much stronger position, not overselling its ideas and showing the areas needed for improvement.\\n\\nOverall, I think there are a lot of novel ideas, and provided the authors release the source code, the ICLR community can build on this.\\n\\n---\\nKrishnamurthy, Akshay, et al. 
\\\"Can large language models explore in-context?.\\\" arXiv preprint arXiv:2403.15371 (2024).\", \"weaknesses\": \"In MAB, I would like to see a setting with variable sigma for each action, as the exploration problem for the LLMs might get easier when all of the actions share the same variance.\\n\\nI find the MovieLens dataset very simplified if the maximum number of actions is set at K=30 (see questions).\", \"questions\": \"1. Why is OFT in Figure 2 present only for Gemini 1.5 Flash?\\n2. Any idea how the LLMs perform in larger action spaces? I can imagine that many real-world applications go well beyond K=30, and any discussion on these scaling laws would be very helpful. This may not be intuitive as we would need to deal with issues such as limited context window and whether LLM can correctly synthesize the information from larger contexts.\\n3. Based on Figure 5, Gemma models perform terribly in exploration, even with all the techniques introduced in the paper. Do you have any explanation/hypotheses on why this is the case? Is it because of the model sizes?\\n4. How practical is it to use LLMs for such explicit exploration? If you have explicit actions, it seems easier to use RAG with UCB/Thompson Sampling baked into the external retrieval system, resulting in optimal exploration.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for the engagement from earlier, especially for requesting that we address your last question fully. The response we posted will be included in the final paper, and we appreciate your effort to help us increase the clarity and substance of the paper!\\n\\nTo summarize, we believe our design of AG is domain-general -- we can use NeuralLinear for complex contextual bandit domains and various RL algorithms for sequential decision-making domains. 
Evaluating MAB and CB is sufficient because our setup encompasses many **real-world** scenarios, such as movie/news recommendations, refugee policy, app notification, etc. We welcome future work on specific domains such as games, robotics, and others.\\n\\n**As the discussion period is ending (tomorrow), we hope we have answered your question. Thank you so much for the thoughtful review. We really appreciate it.**\"}", "{\"comment\": \"Thanks for your reply (and for the reminder), but I'm still not convinced regarding your reply to my questions. In particular, you didn't really answer my last question. (Given that UCB is often suboptimal in structured bandit tasks beyond the two studied in this work, do you believe your proposed mitigations will extend to more complicated tasks?) This is important because like I mentioned in my review, our goal (presumably) is not to see if LLMs can solve bandit tasks, but to evaluate/improve their exploration abilities in a general sense with real-world decision-making tasks in mind.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thank you for the response.\\n\\nI believe the section you added can help the readers. \\n\\nThanks for addressing most of the weaknesses I have highlighted. I have updated my score accordingly.\"}", "{\"summary\": \"This paper explores the ability of large language models to perform optimal decision-making under uncertainty through in-context exploration in multi-armed bandit and contextual bandit settings. This work introduces BanditBench, a comprehensive benchmark suite designed to evaluate LLMs in various bandit tasks. They propose two approaches to make use of bandit algorithms: (1) inference-time algorithmic guidance using established algorithms like UCB and (2) algorithmic distillation, where optimal behavior from algorithms is distilled into LLMs through few-shot demonstrations or fine-tuning. 
They also show the influence of different factors by conducting the ablation experiments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper contributes to a relatively underexplored area by focusing on in-context exploration for LLMs in multi-armed bandit and contextual bandit settings. While LLMs are traditionally used for predictive tasks, this work broadens their application to optimal decision-making under uncertainty.\\n2. The introduction of BanditBench provides a structured benchmark for evaluating LLMs in decision-making tasks that require exploration and exploitation. \\n3. The proposed methods, including inference-time algorithmic guidance and algorithmic distillation, are well-motivated.\", \"weaknesses\": \"1. While the use of Summarized History (SH) and Algorithmic Guidance (AG) to enhance the exploration capabilities of LLMs is an intriguing direction, it is important to note that the results in Table 1 indicate that the application of AG in MAB scenarios does not yield consistent improvements and that its performance remains relatively low compared to traditional bandit algorithms (UCB, LinUCB). Additionally, employing AG introduces extra computational overhead. A more detailed discussion of the effects of AG would be beneficial for understanding its role more clearly.\\n2. The experimental analysis shows mixed results, especially in approaches for knowledge distillation with In-context Demonstration and Optimal Behavior Fine-Tuning for different model sizes and task difficulties. Specifically, in Figure 4, the results across various tasks and methods exhibit oddly similar numerical values (e.g., 0.487, 0.636, 0.267). A deeper investigation into the reasons behind these results could enhance the applicability of the proposed approaches in real-world scenarios.\\n3. 
The experiments are primarily focused on two specific domains (clothing and movie recommendations) with relatively small action spaces. It's unclear how well the proposed methods generalize to domains with much larger action spaces (e.g., thousands of items in real-world recommendation systems) or other decision-making problems where exploration could be more challenging due to the size and complexity of the task.\", \"questions\": \"Please see the weakness part.\\n1. Given that the results in Table 1 suggest that the use of Algorithmic Guidance (AG) does not lead to consistent improvements in MAB scenarios, could you provide further insights into the specific conditions under which SH and AG are most effective (especially compared with UCB or LinUCB)? \\n2. Since the results in Figure 4 indicate that in-context demonstration performs better in some cases (e.g., Bernoulli Video and Summarized History) while fine-tuning is more effective in others (e.g., Bernoulli Clothes and Raw History), could you provide further analysis to help guide the selection of the most appropriate method in practical applications? Besides, could you clarify the numeric similarities observed in Figure 4?\\n3. 
How well do the proposed methods generalize to domains with much larger action spaces, such as real-world recommendation systems that involve thousands of items or more complex decision-making problems where exploration becomes more challenging due to the increased task size and complexity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"As the author/reviewer discussion period is getting close to an end (in 4 days -- Nov 26), we are wondering if our rebuttal addresses some of your concerns about the paper.\"}", "{\"metareview\": \"This paper explores the ability of large language models to perform optimal decision-making under uncertainty through in-context exploration in multi-armed bandit and contextual bandit settings. This work introduces BanditBench, a comprehensive benchmark suite designed to evaluate LLMs in various bandit tasks. They propose two approaches to make use of bandit algorithms: (1) inference-time algorithmic guidance using established algorithms like UCB and (2) algorithmic distillation, where optimal behavior from algorithms is distilled into LLMs through few-shot demonstrations or fine-tuning. They also show the influence of different factors by conducting ablation experiments.\\n\\nWhile the reviewers and the AC appreciate the new benchmark and the algorithmic contributions, there are two main concerns: (1) limited novelty compared to prior work, and (2) a lack of in-depth analysis of the proposed approach. The AC agrees with these concerns and thus recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"There are two main concerns: (1) limited novelty compared to prior work, and (2) a lack of in-depth analysis of the proposed approach. These concerns were not fully addressed in the rebuttal.\"}", "{\"comment\": \"Thank you for the suggestion! 
We would like to clarify that the objective of our work is to assess and enhance LLMs\\u2019 inherent exploration capabilities in decision-making tasks. AG is one of the inference-time techniques we explored, but it is not the only contribution of our paper. We are interested in whether few-shot demonstrations and algorithmic distillation can enhance LLMs\\u2019 exploration capabilities, and injecting UCB knowledge (AG) is one approach we employed. We conducted a comprehensive analysis to study the benefits and trade-offs of different methods. We are grateful for the reviewer's suggestion, and we are updating the paper to make this clearer.\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you for your thoughtful review. We address your concerns point by point below:\\n\\n> AG in MAB scenarios does not yield consistent improvements and that its performance remains relatively low compared to traditional bandit algorithms (UCB, LinUCB). Additionally, employing AG introduces extra computational overhead. A more detailed discussion of the effects of AG would be beneficial for understanding its role more clearly.\\n\\nThank you for your insightful comment! You\\u2019re absolutely right that for smaller models like Gemma-2B and 9B, the impact of AG is minimal and can even be negative in some cases. However, for larger models, we observe significant gains even in simpler setups like MAB, where Flash improves from 26.9% \\u2192 31.3% and Pro jumps from 44.1% \\u2192 57.8% - a substantial improvement of 13.7 points. The improvements are even more pronounced in complex scenarios like contextual bandits, where AG provides notable benefits during inference, as highlighted in Table 1. The point you raised is very valid \\u2013 with the additional computational cost, is AG worth it? 
Our findings suggest that in complex settings, such as contextual bandits, the performance boost offered by AG justifies the extra computational overhead.\\n\\n> Specifically, in Figure 4, the results across various tasks and methods exhibit oddly similar numerical values (e.g., 0.487, 0.636, 0.267). A deeper investigation into the reasons behind these results could enhance the applicability of the proposed approaches in real-world scenarios.\\n\\nThank you for your observation! To clarify, the identical win-rates of 0.487 in Fig 4(a) (Few-shot + Bernoulli Video k=5, \\u0394 Easy) and Fig 4(b) (Few-shot + Summarized History) are not coincidental - they both refer to the same model. This model uses few-shot examples from Bernoulli Video k=5, \\u0394 Easy and uses Summarized History as problem representation. Similarly, the identical value of 0.636 reflects the same scenario. Regarding the win-rate of 0.267 in Figure 4(c), we identified an error in our calculations. The correct win-rate for the Raw History OFT model should be 0.286. We have addressed this issue and updated the figures accordingly in the revised version of the paper. Thank you for bringing this to our attention!\\n\\n> Given that the results in Table 1 suggest that the use of Algorithmic Guidance (AG) does not lead to consistent improvements in MAB scenarios, could you provide further insights into the specific conditions under which SH and AG are most effective (especially compared with UCB or LinUCB)?\\n\\nThank you for your question. We observe consistent improvements when transitioning from raw history to Algorithmic Guidance (AG) in two key cases: (1) larger models like Flash and Pro, and (2) more complex scenarios, such as contextual bandits. As you noted, most real-world decision-making systems closely resemble contextual bandit frameworks. 
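For reference, the LinUCB-style index that Algorithmic Guidance surfaces to the model can be sketched as follows. This is a generic disjoint LinUCB in the spirit of Li et al. (2010), an illustrative sketch rather than the authors' exact AG computation, and the toy reward setup is invented for the example:

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression reward model per arm,
    scored as theta_a^T x + alpha * sqrt(x^T A_a^{-1} x)."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # I + sum of x x^T
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # sum of r * x

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                             # estimated reward model
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)   # exploration bonus
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy run: arm i's true reward is the i-th context feature.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=2, dim=2)
for _ in range(200):
    x = rng.random(2)
    arm = bandit.select(x)
    bandit.update(arm, x, reward=x[arm])
```

As an arm accumulates observations, its covariance matrix grows and the exploration bonus shrinks, so the score is eventually dominated by the estimated mean reward - the explore-then-exploit pattern that the AG condition exposes in text.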
These systems often involve extremely large action spaces and typically rely on larger models to achieve optimal performance.\\nTo highlight the impact on (1) larger models, we conducted an additional analysis by calculating the average win-rate of the models across domains with action spaces of K=5 and K=20 in the MAB experiment. The breakdown of improvements with raw history (RH) versus Algorithmic Guidance (AG) across different numbers of actions is shown below:\\n\\nRH denotes Raw History.\\n\\nAG denotes the model with algorithmic guidance.\\n\\n| | Flash + RH | Flash + AG | Pro + RH | Pro + AG |\\n|------|------------|------------|----------|----------|\\n| K=5 | 33.6% | 26.6% | 48.0% | 67.6% |\\n| K=20 | 21.9% | 37.9% | 43.0% | 51.6% |\\n\\nWe see that AG consistently helps both Flash and Pro when the number of actions is large. We hypothesize that providing AG is crucial when the action space is large.\\n\\nTo illustrate (2), complex scenarios, we observe a similar phenomenon:\\n\\n| | Flash + RH | Flash + AG | Pro + RH | Pro + AG |\\n|------|------------|------------|----------|----------|\\n| K=10 | 0.0% | 35.7% | 7.1% | 57.1% |\\n| K=30 | 0.0% | 57.1% | 7.1% | 71.4% |\\n\\nNote that win-rate is computed as a comparison between models. We show that adding AG in harder tasks yields a larger relative improvement in model ranking.\"}", "{\"summary\": \"This paper examines the ability of large language models (LLMs) to perform decision-making tasks. In particular, it is focused on Multi-Armed Bandit (MAB) and Contextual Bandit (CB) problems. The paper introduces BanditBench, a benchmark suite for evaluating large language models in decision-making tasks within bandit environments. It also proposes two approaches to enhance LLM exploration: inference-time algorithmic guided support and algorithmic distillation through in-context demonstrations and fine-tuning using synthetic data generated from optimal algorithms. 
Results show interesting behavior of LLM-agents in bandit tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Addresses an important area of LLMs in decision-making tasks: this paper addresses a very timely topic. LLM agents are an important research direction that has recently seen a surge in popularity. New research in this area is fundamental in order to better understand the behavior of LLMs when they face decision-making problems under uncertainty.\", \"New benchmark: The paper introduces BanditBench, which is a novel benchmark for evaluating LLM exploration abilities. A benchmark in this research area is fundamental. Many papers in this area have different experimental settings. This makes it hard to compare them and for the whole research community to make reliable progress. For this reason, a benchmark on LLM agents is fundamental.\", \"Empirical evaluation: The paper also conducts comprehensive empirical evaluations and ablation studies on the proposed benchmark. I think that these results are interesting for the research community.\"], \"weaknesses\": [\"Lack of novelty in some of the contributions: While I believe that BanditBench is a great contribution, the other claim of this paper is: \\\"[...] we propose methods to enhance LLM\\u2019s decision-making capability by leveraging optimal algorithms, including algorithmic guided inference-time support and algorithmic distillation approach\\\". The proposed approaches, however, seem to lack novelty.\", \"In particular, the technique that the paper calls \\\"Optimal Behavior Fine-Tuning\\\" seems to be exactly what is known in the literature as Behavioral Cloning. 
\\\"In-Context Few-Shot Demonstration\\\" instead is a sort of in-context behavioral cloning.\", \"Did not influence the score, but I feel that it may be useful to the readers:\", \"Related work: In this paper, the authors analyze LLM agents' performance in decision-making and how they deal with uncertainty and exploration. There are some recent papers in this area that feel very relevant:\", \"Controlling Large Language Model Agents with Entropic Activation Steering, Rahn et al., arXiv 2024. This paper investigates exactly the bandit scenario with LLM agents and tries to improve exploration with activation steering using the entropy at the representation level.\", \"On the Importance of Uncertainty in Decision-Making with Large Language Models, Felicioni et al., TMLR 2024. Also this paper studies LLM agents in the (contextual) bandit scenario, but it does it by creating a new final layer on top of the pre-trained LLM and uses various approaches to approximate the Bayesian posterior to implement Thompson Sampling and improve the exploration capabilities of the LLM agent.\"], \"questions\": [\"Is \\\"Optimal Behavior Fine-Tuning\\\" what is known in the literature as Behavioral Cloning? If so, please change the name in your paper. It can be confusing to a reader\", \"Can the applicability of BanditBench be extended to other decision-making scenarios beyond bandit settings? Can you add some discussion about it in the paper (if you find some space, otherwise in the appendix)? I feel like recently LLM agents in more complex domains such as MDPs are very relevant and may be very useful in many real-world applications. 
Notice however that I believe that a BanditBench is absolutely needed, even if it is a simplified MDP version, because it allows a more careful analysis of the exploration-exploitation trade-off in LLM bandits.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Shared Concerns\", \"comment\": \"We want to thank all reviewers again and use this space to address some concerns shared across reviewers:\\n\\n**1. Difference to Krishnamurthy et al. 2024 study**\\n\\nOur contributions extend well beyond merely enhancing the benchmark. We highlight five core differences:\\n\\n1. **More comprehensive benchmark**: We offer a more comprehensive benchmark by adding a Gaussian bandit setting for floating-point-based reward observation.\\n2. **Contextual bandit**: We extend the MAB setting to include contextual bandits, further evaluating the generalization capability of LLMs for exploration in more complex environments. Such a task requires LLMs to understand a user preference vector expressed in text.\\n3. **Algorithmic distillation techniques**: We delve deeper into developing effective methods for distilling optimal exploration behavior into LLMs, demonstrating the effectiveness of both in-context few-shot demonstration and optimal behavior fine-tuning. \\n4. **Extensive ablation studies**: Our extensive ablation studies shed light on critical practical considerations, including the impact of task difficulty when selecting distillation examples and the importance of representation alignment.\\n5. **Regret analysis**: Furthermore, we offer a more rigorous analysis of the functional interpretation of LLM exploration behavior, providing a principled approach to measuring exploration efficiency. \\n\\n**2. 
Proposed Method\\u2019s Generalization to Complex Environments.**\\n\\nThe challenge of exploration indeed scales with the size of the action space, and as the number of actions in a system grows significantly, the efficiency of any algorithm, including our proposed methods and classical ones like Linear UCB, is expected to decrease. However, there are mitigations, such as using a hierarchical bandit approach.\\n\\nAdditionally, we conducted experiments to test our algorithm's generalization by training it on data collected from smaller action spaces and evaluating it on larger action spaces. This analysis provides valuable insights into how well our approach scales and adapts to more extensive domains, such as real-world recommendation systems or other complex decision-making problems.\\n\\nHere, we show the performance of a Flash model fine-tuned on 5 arms and then evaluated on 20 arms, compared with the baseline:\\n\\n| | Flash | Flash + Fewshot (on K=5 with SH) | Flash + OFT (on K=5 with RH) |\\n|------|-------|----------------------------------|-------------------------------|\\n| K=20 | 21.9% | 41.8% | 46.1% |\\n\\nWe see that both few-shot examples from K=5 (a simpler domain) and optimal behavior fine-tuning generalize to the harder domain where K=20.\\n\\nIn addition to fine-tuning, we also saw our inference-time strategy AG provide a bigger performance increase when the environment becomes more complex (the action space grows larger):\\n\\n| | Flash + RH | Flash + AG | OFT Flash | Pro + RH | Pro + AG | UCB |\\n|------|------------|------------|-----------|----------|----------|-------|\\n| K=5 | 33.6% | 26.6% | 64.1% | 48.0% | 67.6% | 87.1% |\\n| K=20 | 21.9% | 37.9% | 67.2% | 43.0% | 51.6% | 94.1% |\\n\\nLarger action spaces (e.g., K=20) present greater challenges for all models, and we observe a notable performance drop for LLMs that rely on raw history. 
However, the techniques proposed in our paper, such as inference-time Algorithmic Guidance (AG) and optimal behavior fine-tuning (OFT), show increasing importance in these settings.\\n\\nThis suggests that while LLMs may struggle with scaling in raw-history setups, the enhancements explored in this work are particularly valuable for handling larger and more complex action spaces, making them essential for real-world applications with larger K.\\n\\nWe fully agree that rigorously understanding LLMs' exploration behavior in complex environments is both a critical and exciting direction for future research. Our proposed training-time and inference-time methods have demonstrated some ability to scale. We will include this discussion in the revised version of the paper.\"}", "{\"title\": \"Response 2\", \"comment\": \"**Suboptimal Exploration Trajectories in Sequential Decision-Making Tasks**\\n\\nIf by \\u201ccomplex tasks,\\u201d you mean real-world sequential decision-making tasks like Atari games, robotic control, or navigation, we agree they are beyond the scope of this work. However, we conjecture that our conceptual idea of teaching LLMs using existing algorithms can be generalized to such settings. The empirical RL literature offers many existing algorithms for these scenarios [3]. Our approach suggests that leveraging existing RL algorithms - value-based [4] or policy-based [5] - to generate improved exploration trajectories (even if sub-optimal) and distilling this behavior into LLMs should already improve upon the exploration capability of a vanilla LLM. Our method is general and not domain-specific.\\n\\nTo the best of our knowledge, there are no public benchmarks for evaluating LLMs\\u2019 decision-making capabilities in well-established real-world sequential decision-making tasks. Addressing this gap and improving LLMs\\u2019 capabilities in such scenarios is an exciting direction for future work.\\n\\n\\n[3] Osband, Ian, Benjamin Van Roy, Daniel J. 
Russo, and Zheng Wen. \\\"Deep exploration via randomized value functions.\\\" Journal of Machine Learning Research 20, no. 124 (2019): 1-62.\\n\\n[4] Haarnoja, Tuomas, Aurick Zhou, Pieter Abbeel, and Sergey Levine. \\\"Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.\\\" In International conference on machine learning, pp. 1861-1870. PMLR, 2018.\\n\\n[5] Schulman, John, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response! Following your suggestion, we\\u2019ve updated the paper to include citations (with changes highlighted in red) and added a discussion in **Appendix Section A.4** to clarify the differences. You can see it by directly clicking on the PDF button. Please feel free to let us know if you have any further questions or concerns!\", \"we_include_the_discussion_section_below\": \"> Optimal Behavior Fine-tuning (OFT) and Behavior Cloning share many similarities. Although both approaches rely on maximum-likelihood learning, their objectives are different: OFT seeks to encode a dynamic, iterative refinement process, while BC focuses on replicating static behavior. OFT is designed for algorithm distillation, focusing on capturing a sequence of self-improvement behaviors, and generalization across any new test domains. In contrast, BC aims to learn a policy by mimicking a static policy, with no iterative improvement between trajectories.\\n\\n> This difference becomes very clear when we think of an example. We have a deterministic Markov policy $\\\\pi$ that we can use to create this dataset. We call this the sampling policy. To create a behavior cloning dataset, $D_{\\\\text{BC}}$, during dataset construction, for the same state $s$, the policy remains unchanged, which means that $\\\\pi(a|s)$ remains the same in the entire dataset. 
To create an algorithm distillation dataset $D_{\\\\text{OFT}}$, the sampling policy is self-improving as the data collection continues, $\\\\pi(a|s)$ changes even for the same $s$ between early and late trajectories of this dataset.\"}", "{\"comment\": \"Thank you for your detailed responses to the comments. I appreciate the clarifications you've provided, and I would like to follow up with a few additional questions to further understand your approach and findings:\\n\\n1. In the MAB scenarios, the performance of AG is lower than using SH across different model sizes (as shown in Table 1). Could you provide further analysis or insights on why this is the case, and whether there are specific conditions under which AG might outperform SH in these settings?\\n2. Regarding your explanation of Figure 4, it appears that Figure 4(a) compares (Bernoulli Video k=5, $\\\\Delta$ Easy) with **SH** and (Bernoulli Clothes k=20, $\\\\Delta$ Hard) with **RH**. Given this distinction, would it be possible that comparing the influences of difficulty of MAB tasks for Few-shot or OFT could introduce some unfairness? This might lead to potentially misleading conclusions from Figure 4. I would appreciate your thoughts on this point.\\n3. Lastly, I noticed some modifications in the revised version of the paper regarding the data in Figure 4 and Figure 2. Could you kindly explain the reasons behind these changes? Understanding the rationale behind the revisions would help clarify any discrepancies in the original results.\\n\\nThank you once again for your thoughtful responses, and I look forward to your further clarifications.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the response.\\n\\nNow I think I got what you mean. 
With OFT, what you are doing is more of a sort of algorithm distillation: you are trying to teach the dynamics of an optimal exploration algorithm (such as UCB) rather than imitate the behavior of a fixed policy.\\n\\nNevertheless, we must acknowledge that OFT and BC share core similarities, such as the approach of supervised learning from demonstration. In my opinion, the authors should acknowledge BC in the paper and also provide clarification on the differences between OFT and BC, as they did in this discussion.\\n\\nIf the authors insert this discussion in the paper, I will be willing to raise my score.\"}", "{\"title\": \"Response 2\", \"comment\": \"> How practical is it to use LLMs for such explicit exploration? If you have explicit actions, it seems easier to use RAG with UCB/Thompson Sampling baked into the external retrieval system, resulting in optimal exploration.\\n\\nThank you for the insightful question! For explicit actions, LLMs excel at capturing nuanced relationships among semantically meaningful actions, leveraging their extensive pre-training on diverse data. This effectively injects prior knowledge into exploration algorithms, which is particularly helpful in cold-start scenarios where initial information is limited.\\n\\nWe fully agree that combining RAG with UCB/Thompson Sampling offers a robust approach by integrating an exploration component directly into the system. However, our motivation in this work is to investigate whether \\\"exploration\\\" capabilities can be embedded directly within the LLM itself. In this study, we focus on explicit actions as a starting point. Our hope is that the learned \\\"exploration\\\" behavior can generalize beyond explicit actions to implicit ones\\u2014cases where predefined actions are not available or practical, such as the intermediate steps required for solving math problems or complex coding tasks. We leave this as exciting future work.\\n\\nThank you for the review again. 
Did we answer all of your questions? Happy to expand on these more!\"}", "{\"comment\": \"Thank you very much for answering all my questions! I am keeping my score.\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you for your thoughtful review.\\n\\n> Krishnamurthy et al. 2024 study a very similar multi-armed bandit setting. While the multi-armed bandit results in this submission are more comprehensive, their findings are similar to Krishnamurthy et al.\\n\\nWe acknowledge that Krishnamurthy et al. 2024 explored LLM\\u2019s in-context learning abilities in the MAB setting. However, our work makes several key advancements:\\n\\n1. A more comprehensive benchmark. For MAB, we add a Gaussian bandit setting, which requires LLM to process floating point numbers. Therefore, it assesses LLM for a different capability than the Bernoulli bandit in Krishnamurthy et al. 2024. We also include a broader range of tasks: evaluation with K=20 arms and two new scenario descriptions (video recommendation and clothes shopping).\\n\\n2. Contextual bandit: We extend the MAB setting to include contextual bandits, further evaluating the generalization capability of LLMs for exploration in more complex environments.\\n\\n3. Algorithmic distillation techniques: We also study efficient techniques to distill optimal exploration behavior, via few-shot in-context demonstrations and oracle behavior fine-tuning, which opened up the question about dataset selection (See Fig 4 (a)) \\u2013 we found that shorter, simpler examples are better as few-shot examples, but longer and harder examples are better for fine-tuning.\\n\\n4. Extensive Ablation studies: We conducted an extensive ablation study to understand how various factors, such as task difficulty and textual representation, influence the efficiency of LLM exploration. We also offer a regret analysis in Figure 5, characterizing the exploration capability with two fitted parameters alpha and beta. 
\\n\\nOverall, while Krishnamurthy et al. (2024) focused on smaller-scale MAB tasks, we offer a more thorough analysis across MAB and CB tasks at various scales. Furthermore, our contributions extend well beyond merely enhancing the benchmark. More importantly, we delve deeper into developing effective methods for distilling optimal exploration behavior into LLMs, demonstrating the effectiveness of both in-context few-shot demonstration and optimal behavior finetuning. Additionally, our extensive ablation studies shed light on critical practical considerations, including the impact of task difficulty when selecting distillation examples and the importance of representation alignment. Furthermore, we offer a more rigorous analysis of the functional interpretation of LLM exploration behavior, providing a principled approach to measuring exploration efficiency. \\n\\n> If we give LLMs things which make it easier for them to compute an upper-confidence bound, are we testing the LLMs\\u2019 ability to explore, or their ability to implement UCB?\\n\\n> Given that UCB is often suboptimal in structured bandit tasks beyond the two studied in this work, do you believe your proposed mitigations will extend to more complicated tasks?\\n\\nWe would also like to clarify that the objective of our work is to assess and enhance LLMs\\u2019 inherent exploration capabilities in decision-making tasks. Our goal is not to replace UCB with LLM in these simple settings. Instead, we aim to investigate whether LLMs can reason about readily available information and perform efficient exploration accordingly. This will establish a foundation for efficient exploration in complex tasks with explicit/implicit action spaces. Furthermore, we are interested in whether algorithmic distillation can enhance LLMs\\u2019 exploration capabilities, and injecting UCB knowledge is one approach we employed. 
While our current work focuses on simplified settings, it lays the groundwork for future research into more complex scenarios.\\n\\nWe hope our responses have clarified your concerns and addressed your questions. We are happy to answer more questions if they arise. Thank you very much!\"}", "{\"title\": \"Follow-up Response 1\", \"comment\": \"Thank you so much for the follow-up questions and for helping us bring more clarity to our paper.\\n\\n> In the MAB scenarios, the performance of AG is lower than using SH across different model sizes (as shown in Table 1). Could you provide further analysis or insights on why this is the case?\\n\\nIn MAB, the only difference between SH and AG is the exploration bonus introduced in the text. As shown in Table 1, AG is comparable with SH for larger models such as Gemini Flash and Pro, and it shows worse performance on smaller models like Gemma 2B. Here, we conducted further analysis with the behavior of AG on Gemma 2B based on your suggestion.\", \"there_are_two_types_of_failures_one_can_expect_in_a_bandit_problem\": \"1). **Over-exploration on suboptimal choices which results in lower exploration efficiency**: over-exploration happens when the algorithm spends too much time exploring suboptimal choices, reducing overall efficiency. This behavior can be quantified using the **MinFrac** metric (Krishnamurthy et al. 2024), which measures the fraction of pulls allocated to the least-selected arm. An ideal algorithm should exhibit high **MinFrac** during early exploration (when T is small) and low MinFrac as T increases (indicating effective exploitation).\\n\\n2). **Failure to identify the optimal arm**: this occurs when the algorithm struggles to converge on the best option over time. To capture this, we compute the percentage of times an optimal arm is pulled at different time steps (**OptFrac**). 
Ideally, this probability should increase as the process progresses, indicating the model's ability to self-improve.\\n\\nWe hypothesize that AG might show some worse performance on smaller models because the \\u201cexploration bonus\\u201d in text might lead LLMs to over-explore randomly \\u2013 this can be captured by a higher MinFrac value for AG than for SH, and a lower OptFrac value for AG than for SH.\\n\\nWe report these metrics over T time steps (for convenience of visualizing the result in a table, we choose the 10%-th step, 25%, 50%, 75%, 100%-th step / last step). For the brevity of this rebuttal, we focus on Clothes + Video, K=5, Hard, Bernoulli Bandit. We will provide a more comprehensive analysis in the paper.\\n\\n| MinFrac (Suboptimal Exploration) | SH | AG |\\n|----------------------------------|---------------------------|---------------------------|\\n| Gemma-2B | [0.0, 0.2, 0.1, 0.1, 0.1] | [0.0, 0.4, 0.2, 0.2, 0.2] |\\n\\nAdding AG leads Gemma-2B to pull less optimal arms 2 times more often compared to SH.\\n\\n| OptFrac (Optimality) | SH | AG |\\n|----------------------|--------------------------------|--------------------------------|\\n| Gemma-2B | [18.6, 19.4, 19.7, 19.9, 20.0] | [13.2, 12.8, 14.0, 14.3, 14.5] |\\n\\nWe see that the extra random exploration in AG does not lead to Gemma-2B to identify the optimal arm. Adding extra information causes over-exploration and confusion for smaller models.\\n\\n> whether there are specific conditions under which AG might outperform SH in these settings?\\n\\nWe hypothesize that AG helps in more challenging tasks where exploring diverse choices is critical, provided the model is sufficiently large to balance exploration with efficient exploitation without being misled by the exploration bonus discussed in the text. To explore this further, we analyzed the Gemini 1.5 Pro's performance on the harder domain (Clothes + Video, K=5, Hard, Bernoulli Bandit). 
Our findings reveal that, compared to SH, AG demonstrates significantly higher OptFrac and lower MinFrac.\\n\\n| MinFrac (Suboptimal Exploration) | SH | AG |\\n|----------------------------|------------------------------|------------------------------|\\n| Gemini-1.5 Pro | [38.1, 20.9, 10.9, 7.4, 5.6] | [35.8, 19.1, 9.7, 6.6, 4.9] |\\n\\n| OptFrac (Optimality) | SH | AG |\\n|----------------------|---------------------------|--------------------------------|\\n| Gemini-1.5 Pro | [4.6, 6.1, 6.7, 7.5, 8.2] | [15.8, 25.6, 32.3, 35.2, 36.8] |\"}", "{\"title\": \"Follow-up Response 2\", \"comment\": \"> Regarding your explanation of Figure 4, it appears that Figure 4(a) compares (Bernoulli Video k=5, \\u0394 Easy) with SH and (Bernoulli Clothes k=20, \\u0394 Hard) with RH. Given this distinction, would it be possible that comparing the influences of difficulty of MAB tasks for Few-shot or OFT could introduce some unfairness? This might lead to potentially misleading conclusions from Figure 4. I would appreciate your thoughts on this point.\\n\\nThank you for your question! To clarify, in our ablation study (Figure 4), three key components should be considered:\\n\\n1. **Training Data Domains**: These represent the data used for either few-shot demonstration or fine-tuning. For example, \\\"Bernoulli, Video, k=5\\\" serves as the representative of an easy domain, while \\\"Bernoulli, Clothes, k=20\\\" represents a hard domain.\\n2. **Summarization Methods**: The two methods compared are SH (for Few-Shot) and RH (for OFT).\\n3. **Evaluation Domains**: The evaluation is the same and consistent across all MAB tasks introduced in BanditBench, with the results measured as win rates over all models.\\n\\nIn Figure 4(a), specifically, the following four configurations are compared:\\n\\n1. **Few-Shot + Bernoulli Video, k=5, \\u0394 Easy** (training on the easy domain) + SH (summarization method) + evaluation across all MAB domains.\\n2. 
**Few-Shot + Bernoulli Clothes, k=20, \\u0394 Hard** (training on the hard domain) + SH (summarization method) + evaluation across all MAB domains.\\n3. **OFT + Bernoulli Video, k=5, \\u0394 Easy** (training on the easy domain) + RH (summarization method) + evaluation across all MAB domains.\\n4. **OFT + Bernoulli Clothes, k=20, \\u0394 Hard** (training on the hard domain) + RH (summarization method) + evaluation across all MAB domains.\\n\\nThe goal of Figure 4(a) is to analyze how the difficulty of tasks used in oracle trajectories influences the performance of the two methods (Few-Shot and OFT), while keeping other factors consistent. For fairness, SH is paired with Few-Shot and RH with OFT, as these summarization methods generally yield the best performance for their respective approaches.\\n\\nIt is important to note that the figure does not aim to directly compare Few-Shot with OFT. Instead, it focuses on how task difficulty impacts each method independently. This ensures a valid and fair comparison. Thank you for pointing this out! We will include a discussion of this aspect in the revised version of our paper.\\n\\nSimilarly, for Figure 4(b), our focus is on how different textualization or summarization methods in oracle trajectories influence the performance of the two approaches: in-context demonstration (Few-Shot) and OFT. The comparison involves the following four configurations:\\n\\n1. **Few-Shot + Bernoulli Video, k=5, \\u0394 Easy** (training data from the easy domain) + SH.\\n2. **Few-Shot + Bernoulli Video, k=5, \\u0394 Easy** (training data from the easy domain) + RH.\\n3. **OFT + Bernoulli Clothes, k=20, \\u0394 Hard** (training data from the hard domain) + SH.\\n4. **OFT + Bernoulli Clothes, k=20, \\u0394 Hard** (training data from the hard domain) + RH.\\n\\nAs in Figure 4(a), the comparisons are conducted within each method. 
For example, we compare SH versus RH for Few-Shot on the same task difficulty, and similarly compare SH versus RH for OFT. No cross-method comparisons (Few-Shot vs. OFT) are made. This ensures the conclusions drawn are valid and focused on how summarization methods impact each approach under consistent conditions.\\n\\n> Lastly, I noticed some modifications in the revised version of the paper regarding the data in Figure 4 and Figure 2. Could you kindly explain the reasons behind these changes? Understanding the rationale behind the revisions would help clarify any discrepancies in the original results.\\n\\nThank you for your question! Regarding Figure 2, the original draft version computed the win rate only over three groups: inference-time support, few-shot, and OFT. In the revised version, the win rate is calculated across all models, encompassing four groups: raw performance, inference-time support, few-shot, and OFT.\\n\\nThe inclusion of raw performance, which tends to be poor for Gemma 2B and 9B model trials, resulted in a slight increase in the win rates of other methods, but the general conclusion still holds. To ensure consistency and clarity, we decided to compute the win-rate using the same methodology across the entire paper (for all figures and tables), rather than including or excluding models based on specific figures/ablations. We hope this brings more clarity to the paper.\\n\\n**As the discussion period is coming to an end (tomorrow), we hope we have answered your question clearly. We really appreciate the additional effort you spent on carefully comparing our drafts and figures. Please let us know if you have additional thoughts and whether this response cleared up your confusion.**\"}", "{\"comment\": \"Thank the authors for their further response and additional experiments. 
While the new experiments address some of my concerns, based on the new experimental observations, I believe the paper may require further improvements to better demonstrate the specific effectiveness of the proposed method and provide more in-depth analysis to strengthen the contributions of the work.\\n\\nFor example, while the paper emphasizes the effect of AG on enhancing LLM exploration capabilities, the additional experiments show that AG's performance is only superior to SH in certain models and cases under the MAB setting, which means that the AG method has a limited scope of application. It would be better to determine the application scope and discuss it in the introduction part to show the contribution clearly.\"}", "{\"comment\": \"As the author/reviewer discussion period is getting close to an end (in 4 days -- Nov 26), we are wondering if 1) Our rebuttal addresses some of your concerns about the paper; 2) there is anything else we can do in the next 4 days to change your opinion/position on the current rating?\"}" ] }
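The rebuttals in the record above repeatedly benchmark LLM agents against a UCB baseline on Bernoulli bandits, and the AG variant surfaces a UCB-style exploration bonus in text. As a reference point between records, the following minimal sketch shows classical UCB1 on a K-armed Bernoulli bandit; the function name, arm means, horizon, and seed are illustrative choices, not values taken from the paper under discussion:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Sketch of UCB1 on a Bernoulli bandit; returns per-arm pull counts.

    arm_means: true success probabilities (unknown to the algorithm).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # n_i: number of times arm i was pulled
    rewards = [0.0] * k   # cumulative reward collected from arm i
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization: pull each arm once
        else:
            # pick the arm maximizing empirical mean + sqrt(2 ln t / n_i)
            arm = max(
                range(k),
                key=lambda i: rewards[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < arm_means[arm] else 0.0
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

With a clear gap between arm means, the pull counts concentrate on the best arm while each suboptimal arm still receives a logarithmic number of exploratory pulls; the MinFrac/OptFrac metrics discussed in the rebuttal above measure exactly this exploration/exploitation trade-off.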
0FbzC7B9xI
Improved Sampling Of Diffusion Models In Fluid Dynamics With Tweedie's Formula
[ "Youssef Shehata", "Benjamin Holzschuh", "Nils Thuerey" ]
State-of-the-art Denoising Diffusion Probabilistic Models (DDPMs) rely on an expensive sampling process with a large Number of Function Evaluations (NFEs) to provide high-fidelity predictions. This computational bottleneck renders diffusion models less appealing as surrogates for the spatio-temporal prediction of physics-based problems with long rollout horizons. We propose Truncated Sampling Models, enabling single-step and few-step sampling with elevated fidelity by simple truncation of the diffusion process, reducing the gap between DDPMs and deterministic single-step approaches. We also introduce a novel approach, Iterative Refinement, to sample pre-trained DDPMs by reformulating the generative process as a refinement process with few sampling steps. Both proposed methods enable significant improvements in accuracy compared to DDPMs, DDIMs, and EDMs with NFEs $\leq$ 10 on a diverse set of experiments, including incompressible and compressible turbulent flow and airfoil flow uncertainty simulations. Our proposed methods provide stable predictions for long rollout horizons in time-dependent problems and are able to learn all modes of the data distribution in steady-state problems with high uncertainty.
[ "physics-based simulations", "diffusion models", "improved sampling" ]
Accept (Poster)
https://openreview.net/pdf?id=0FbzC7B9xI
https://openreview.net/forum?id=0FbzC7B9xI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vd40NKmSD2", "sqG4FObkp5", "sYhnJbFoHz", "pr34H4eepY", "pjnuZUU21v", "olgtZ03TQc", "nk6frWVUQN", "mBxB1G0TTA", "gfRq72PAnL", "fBJDeFR2JM", "eXPXX0zpGl", "adOWGhmUH6", "Yq3QADSR64", "Y0BSKHweWk", "WjkkXOjJSU", "UOR7eOlSbP", "QmE7gazpbi", "PRPkn5oTLm", "OT4Nv7HUEw", "LQI8Lf2giB", "LBCrEWWXLs", "KpngoBoP3d", "HkixOk76PL", "GrScursuRA", "Cb0OYodhya", "BNb0EmnJRI", "5TxSCikX6c", "373Tm3RqJy", "2XNWXTWM0B", "1NuQyoDhyl" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1732827923485, 1732208867505, 1732534162910, 1730289472970, 1732208905628, 1730506324824, 1732208518124, 1732508370478, 1737523648114, 1732309600905, 1732209196937, 1732209553136, 1732658431209, 1732209598646, 1732828037977, 1732550004251, 1732539299016, 1732526329333, 1732656613061, 1732207700018, 1730111803484, 1732534049695, 1732210409516, 1730626612250, 1732533857287, 1733082725523, 1732208443102, 1730140955597, 1734553666115, 1732207733681 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_8zxM" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_PMEd" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Area_Chair_sG32" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_6cX6" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_8zxM" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_6cX6" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_TSeh" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_6cX6" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_TSeh" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_PMEd" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ], [ "ICLR.cc/2025/Conference/Submission4566/Reviewer_fSso" ], [ "ICLR.cc/2025/Conference/Submission4566/Area_Chair_sG32" ], [ "ICLR.cc/2025/Conference/Submission4566/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your earlier comments and insights. \\n\\nWe value your feedback greatly and want to ensure that our responses have addressed your concerns. To this end, we invite you to review the updated version of our paper, which includes new baselines, additional analysis, and improvements based on other reviewers' suggestions. 
If you have any further questions or additional feedback, we would be happy to address them.\\n\\nWe sincerely appreciate your time and valuable feedback in helping us refine our work.\"}", "{\"comment\": \"We thank the reviewer for their comments and we address their concerns below.\\n\\n**Superiority of the methods over other surrogates, including NOs.** We thank the reviewer for their comment on benchmarking against neural operators. As shown in Tables 6, 7, and 8, we systematically compared our methods with various deep learning baselines, including neural operators (namely FNOs) and UNets. Notably, our transient cases feature periodic boundary conditions, a scenario where FNOs are expected to excel due to their inherent architecture. In Table 6, we outline how our methods clearly outperform these baselines. Since we find from the $Tra$ case that FNOs yield suboptimal results, as corroborated by [1-3], we don't consider them for subsequent experiments. \\nFurther, our primary objective is to enhance the performance of DDPMs to reduce the gap between DMs and deterministic baselines; therefore, our methods don't always outperform UNets, especially when trained with advanced learning techniques. However, we demonstrated that TSM surpasses the best UNets (Table 1 left) while IR outperforms the baseline UNet (Table 1 right). In both cases, TSMs and IR surpass DDPMs in terms of speed and accuracy.\\n\\n[1] Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation. In: ICML (2024).\\n\\n[2] Learned Simulators for Turbulence. In: ICLR (2022).\\n\\n[3] PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers. In: NeurIPS (2023).\\n\\n**Comment regarding the title.** We agree that the title may imply broader applicability than what is covered in our experiments. 
To address this, we will revise the title to emphasize our focus on fluid dynamics.\\n\\n**Effectiveness of the pre-trained DM in IR.** We thank the reviewer for raising an interesting point and *we assume that they mean $t = \\\\gamma_{i,k}$, where $k$ denotes the $k$-th element in a refinement schedule $\\\\gamma_i$.* The effectiveness of a pre-trained DM in IR sampling at any noise level $t$ lies in its ability to approximate the posterior mean $\\\\mathbb{E}[\\\\hat{x_0}| x_t]$ using Tweedie's formula, which directly relates to the likelihood estimation. In addition, we agree that in some cases the distribution $q(x_{init}|x_t)$ might not match the distribution $q(x_{0}|x_t)$ when using a pre-defined forward diffusion ($q(x_l|x_t)$ where $t > l$). Consequently, this places a limitation for the choice of $x_{init}$ to ensure that the forward process posterior estimation is optimal at any noise level $t$.\\n\\n**Comment regarding Eq. (6).** Eq. (6) is an idealization of a refinement schedule $\\\\gamma$ for which each step supersedes the accuracy of the previous step and by extension all the preceding ones. This is not a hard requirement for IR sampling, but rather a consequence of the greedy optimization algorithm we employ for $\\\\gamma$. While this doesn't guarantee that for the last refinement step the error will be smaller than a certain threshold $\\\\epsilon$ as this depends on several factors, it is possible to provide a convergence guarantee. Our IR method is closely related to DDIM, and thus can be interpreted via ODE integrators and guaranteed to converge given an optimized refinement schedule $\\\\gamma$. In fact, the recursive sampling formula for IR, as presented in Algorithm 2, can be recovered from the generalized generative process (see Eq. 7) that supports both Markovian and non-Markovian inference processes by using $\\\\sigma_t = \\\\sqrt{1-\\\\bar{\\\\alpha}_{t-1}}$. 
This essentially means that IR sampling is a special generative process similar to DDIMs (when $\\\\sigma_t = 0$) and DDPMs, which can be recovered with a proper choice of $\\\\sigma_t$. Thus, IR belongs to the non-Markovian family of generative processes, to which DDIM belongs.\\n\\n**Data consistency in IR.** We appreciate the reviewer\\u2019s suggestion regarding data consistency to improve the accuracy of IR sampling. Although it is possible to apply data consistency after line 7 in Algorithm 2, our experiments demonstrate that the proposed IR method achieves sufficient accuracy without requiring this additional step, thereby maintaining low computational cost. Enforcing data consistency with respect to $\\\\hat{x}_0$ would require ensuring that $\\\\hat{x}_0$ is a physically meaningful, non-noisy prediction of the flow field, which is exclusively achieved for noise steps close to 0 [4]. Additionally, to enforce data consistency, an auxiliary network or an expensive calculation of the governing PDE residual through high-order derivative approximations would be required as detailed in [5]. Thus, by not requiring data consistency in IR, the algorithm gains flexibility in selecting the noise steps for sampling (i.e., choosing the refinement schedule $\\\\gamma$) while still allowing for future extensions that incorporate data consistency, if desired.\\n\\n[4] Freedom: Training-free energy-guided conditional diffusion model. 2023.\\n\\n[5] A physics-informed diffusion model for high-fidelity flow field reconstruction. 2023.\"}", "{\"comment\": \"We greatly appreciate the reviewer\\u2019s thoughtful feedback and acknowledgment of the quality and relevance of our work. Our focus on fluid dynamics is intentional since fluid dynamics problems are prevalent and because we identify a gap where DMs have underperformed in this domain, primarily with respect to inference speed, as we outline in the paragraph starting at line 54. 
Therefore, our methods target improvements in this particular domain to reduce this gap by utilizing the unique data distribution characteristics of these simulations. We thus believe that our methods can generalize to other physics-based simulations beyond fluid dynamics if they exhibit similar distributions features. This can indeed be tackled in future studies. Moreover, we would like to emphasize that our contributions address a pressing need in temporal generative modeling and constitute meaningful progress for the machine learning and fluid dynamics communities alike.\\n\\nRegarding theoretical guarantees, we would be happy to investigate if error bounds can be derived from Tweedie's formula as suggested and share any useful outcomes in our final paper.\"}", "{\"summary\": \"This paper studies the application of diffusion models to physics simulations. Over the past years, neural networks have emerged as surrogate modeling approach for physics simulations, with a key use-case being computationally efficient inference. However, for this purpose, diffusion models have the drawback of requiring many function evaluations due to their iterative ancestral sampling procedure. To this end, the authors propose two contributions: (1) truncation of the last steps of the reverse diffusion process, and (2) iterative refinement, which considers a much shorter noise schedule at inference time. Both methods reduce the number of function evaluations and thereby increase sampling speeds. Moreover, the empirical results demonstrate that accuracy is generally maintained and sometimes even improved compared to standard expensive sampling procedures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**S1:** The experimental setup is rigorous, comparing methods in both pointwise metrics and relevant physics-based metrics to provide complementary perspectives on their performance for three relevant datasets. 
Moreover, many additional results and baselines can be found in the appendix, making for an extensive empirical evaluation overall.\\n\\n**S2:** Two methodological contributions are evaluated: truncation of the last steps of the reverse diffusion process, and iterative refinement, which optimizes the inference sampling schedule such that fewer denoising steps are required. Both contributions are aimed at reducing the number of function evaluations to improve the computational efficiency of diffusion models for physics simulations. This is a relevant research direction, since reducing computational cost is one of the primary use-cases of neural simulation models, for which diffusion is emerging as a promising modeling approach. \\n\\n**S3:** The paper is well-written, and the explanations of the proposed algorithms are intuitive and easy to follow and understand. The clear structure of the text helps the reader to efficiently navigate the paper.\", \"weaknesses\": \"**W1:** While reading the text, I found it difficult to distill what the key differences between iterative refinement and PDE refiner are (Lippe et al., 2023). Does it have to do with the greedy optimization method of the refinement schedule (to my knowledge PDE refiner uses a fixed schedule), or details in the formulation of the diffusion process (a nonzero vs zero drift term in IR and PDE refiner respectively), or something else? Since both IR and PDE refiner are quite similar, it would be good if the \\u2018method novelty\\u2019 paragraph explicitly contrasts the two approaches and highlights their differences.
Additionally, if the greedy optimization of $\\\\gamma$ is a core novelty relative to PDE refiner, then it would be beneficial to explain this more elaborately in the main text rather than the appendix, since it would be a key aspect of one of the contributions in this case.\\n\\n**W2:** One of the goals of the paper is to show that the proposed methods close the gap between the diffusion models and deterministic baselines. However, most of the results in the main text (both tables and plots) focus only on diffusion models. It would be relatively straightforward to also show the results of one or two deterministic methods that are considered by the authors in part of the plots and tables, for example in Figure 2 and Table 2. This would help the reader to get a better understanding of the tradeoffs of existing diffusion-based approaches, deterministic approaches, and the proposed methods without taking additional space in the paper.\\n\\n**W3:** The conditioning on the autoregressive step size (j in Sec. 2 of the paper) is already introduced in Gupta et al (2022), and as such cannot be claimed as a contribution of the paper (currently point 1 of the contributions listed in the introduction). Since this is not a core point in the rest of the paper, it seems that this can straightforwardly be removed from the list of contributions in the introduction without affecting the rest of the work and the core contributions significantly.\\n\\n**References:**\\n\\nGupta, J. K., & Brandstetter, J. (2022). Towards multi-spatiotemporal-scale generalized pde modeling. arXiv preprint arXiv:2209.15616.\\n\\nLippe, P., Veeling, B., Perdikaris, P., Turner, R., & Brandstetter, J. (2023). Pde-refiner: Achieving accurate long rollouts with neural pde solvers. 
Advances in Neural Information Processing Systems, 36.\", \"questions\": \"**Q1:** Given that PDE-refiner is conceptually relatively similar to iterative refinement, I am surprised to see a quite large performance difference between the two methods in Appendix D.1. Can the authors explain the reasons behind this large performance gap?\\n\\n**Q2:** The truncation of the last steps of the reverse diffusion process seems to be equivalent to a modification of the noising schedule: we can choose the noise schedule $\\\\beta_t$ such that the first step in the forward diffusion process has already a quite low signal-to-noise ratio (in line with the level corresponding to the last step before truncation in the reverse process), and afterwards noise is added gradually as per usual, while reducing the total amount of steps in the forward process in line with the skip percentage. Can the authors provide their thoughts on this perspective and whether or not they agree? If they agree, can they comment on why the noise schedule that is equivalent to the truncated process is a good choice for this problem setting relative to other problem settings, and in this way place their contribution in the broader context of noise schedules?\\n\\n**Q3:** Please comment on W1-W3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part (1/4)\", \"comment\": \"We thank the reviewer for their thorough review and the invaluable feedback and suggestions. We address their comments and questions below.\\n\\n**W1 & Q1 - Generalization to other settings :** We thank the reviewer for their insights and suggestions. Regarding $Air_{multi}$, we see that our models at least maintain the accuracy of the baseline DDPM and significantly reduce NFEs from 100 down to 41 and 10 for IR and TSM, respectively. 
Thus, we believe that our proposed approaches are promising for application to datasets with complex distributions. \\nFurthermore, we agree that evaluating our approaches on systems with more complex nonlinear dynamics, such as the Kuramoto-Sivashinsky (KS) equation, would provide valuable insights, especially for capturing high-frequency components. While our work does not directly include KS, we address similar challenges in the $Tra$ case, where shock waves introduce sharp, localized discontinuities that correspond to high-frequency energy in the spectral domain, and the $Fturb$ case, which \\\"exhibits extreme events in the form of bursts in kinetic energy and dissipation\\\" [1]. The success of our methods in capturing these features suggests their potential applicability to similarly complex PDEs like KS, yet we still believe it would be interesting to evaluate the performance gains on this dataset and compare against benchmark results.\\nHowever, extending our approaches to KS or similar systems would require careful optimization of DDPM, IR, and TSM parameters, which is beyond the scope of this rebuttal. However, we appreciate the suggestion and would be happy to explore this in the final version.\\n\\n[1] Clustering-Based Identification of Precursors of Extreme Events in Chaotic Systems. In International Conference on Computational Science (pp. 313-327). Cham: Springer Nature Switzerland. (2023)\\n\\n**W2 & Q2 - Counter-intuitive comparison to DDPM.** While we agree with the reviewer that our improvements in terms of both speed and accuracy in some experiments might be surprising given the strong theoretical foundations for general DMs, our improvements arise from domain-specific adaptations. For this reason, we chose our experiments to exhibit different levels of difficulty and complex dynamics in both steady-state and transient scenarios to provide adequate empirical evidence for the efficacy of our approaches. 
\\nFurthermore, we believe that our baseline DDPMs are competitive and well-tuned for our datasets. In fact, in our methodology, we first optimize the best possible baseline DDPM in comparison to (benchmark) deterministic baselines, then we tune for $s$ in TSMs and \\\\{$\\\\gamma$, $x_{init}$\\\\} for IR sampling. Therefore, we are confident that our DDPM results are fair and representative. Additionally, we include a comparison against EDMs in our response below to **W5**.\\n\\n**W3 - Lack of theoretical guarantees.** We agree that deriving theoretical guarantees is challenging. Further, we provide practical heuristics that guide hyperparameter selection effectively across datasets for IR and TSM.\\n\\n- Deterministic test cases\\n - IR: we usually start with $x_{init} \\\\sim \\\\mathcal{N}(0,I)$ and run our greedy optimization algorithm with low N (with $N = |\\\\gamma|$) to obtain an efficient $\\\\gamma$ with low NFEs. As demonstrated in Figure 4(b), $N=5$ often yields very good results for transient cases. $N$ can then be gradually increased to explore other schedules that might improve the accuracy with little increase in NFEs.\\n - TSM: For higher speedup, the search for the optimum $s$ value typically begins within the range $[0.5, 1]$, especially if a large number of diffusion steps $T$ is chosen. We believe that Fig (3) provides several insights for the optimal combination of $s$ and $T$. We first test for both extremes of $s$ and then follow a standard line search approach to arrive at an optimum value for $s$, requiring at most 2 or 3 additional evaluations. We also give priority to models with low $T$ to minimize the NFEs required for inference.\\n- Stochastic test cases\\n - IR: $x_{init}$ is optimal when obtained through truncated sampling of a pre-trained DDPM with $s >= 0.5$.
The output from this approach is usually highly accurate but exhibits clear noise; thus it usually takes $N < 5$ for $\\\\gamma$ to arrive at a clear output while improving or retaining the accuracy of the noisy input. \\n - TSM: We follow the same procedure as before; however, we refrain from extreme $s$ values. Therefore, our search is restricted to $s \\\\in [0.5, 0.9]$, or smaller lower bound for low $T$ values. \\n\\nWe will dedicate an additional section in our main text for an elaborate approach to optimizing the hyperparameters related to our proposed approaches. Moreover, we acknowledge that our experiments don't guarantee universal applicability; therefore, we will revise the generalizing statement to reflect the scope of our findings in the updated version of our paper.\"}", "{\"summary\": \"This article first analyzes DDPM and finds that after an appropriate truncation (stop diffusion), the model has high fidelity and high-efficiency sampling performance. On this basis, an iterative refinement method is introduced to further improve accuracy and long-term stability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Advantages:\\n\\n1. As we all know, if we consider an infinite boundary heat equation (diffusion process), or consider long-term diffusion, then the usefulness of a long part of the noise addition/noise reduction behavior is not that great. The article fully considers and utilizes the diffusion behavior in a finite time (truncation), thus achieving a balance between accuracy and running time, which is a very good point.\\n\\n2. Truncated sampling reduces the uncertainty of the sample to a certain extent, or increases the accuracy of the sample response distribution.\\n\\n3. Based on the content of the appendix(in particular, part D), the experimental effect is very significant. 
In other words, iterative refinement even makes up for the truncated sampling to a certain extent (the useful information and samples that are truncated).\", \"weaknesses\": \"Disadvantages:\\n\\n1. I want to know whether there is a mathematical criterion behind the truncation threshold, or whether it is chosen purely empirically; that is, how interpretable is the choice of what is truncated versus what is retained (and its importance)?\\n\\n2. The purpose of refinement iteration and truncation sampling is to improve efficiency while ensuring a certain degree of accuracy. I think this involves a trade-off. How can such a balance be achieved? Is there a more rigorous mathematical explanation?\\n\\n3. I think the experiments could be broader. One suggestion is to compare with a more general SDE formulation instead of only with DDPM (and Kohl's 2024 work). In addition, do you consider more general PDE solutions in the experiments?\", \"questions\": \"The experimental results are very good, and I hope the authors can add more theoretical analysis. I will be happy to improve my score in subsequent discussions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None. This is original work and there are no ethical issues\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their thorough feedback and comments regarding our work, especially for their appreciation of the experimental setup and our approaches for improved sampling.\\n\\n**W1 & Q1 (comparison to PDE-Refiner):** We appreciate the reviewer\\u2019s comment and agree that explicitly contrasting IR with PDE-Refiner would strengthen the clarity of our method's novelty. Since PDE-Refiner shares a lot of similarities with DDPMs in general, it is expected to share key similarities with other models/samplers including DDIM, EDM, and IR.
We summarize the main differences between IR and PDE-Refiner in the following points:\\n- IR is a sampling algorithm that works with pre-trained DDPMs, while PDE-Refiner requires training from scratch for a fixed schedule and number of refinement steps, similar to DDPMs.\\n- PDE-Refiner predicts the state at the initial step, while IR focuses on predicting the noise throughout the entire refinement process, similar to ancestral sampling. \\n- The combination of a flexible $\\\\gamma$ and $x_{init}$ sets IR sampling apart from PDE-Refiner as it can be treated as a standalone sampling algorithm when $x_{init} \\\\sim \\\\mathcal{N}(0,I)$, but can also be used to refine a noisy or a low-fidelity state.\\n\\nFurthermore, we think that the main reason behind the poor performance by PDE-Refiner in the $Tra$ case is that it was found to be highly sensitive to its key hyperparameters (number of refinement steps and the minimum noise variance) as reported in Kohl et al. (2024). This makes an efficient hyperparameter tuning for this method difficult, leading to suboptimal results. \\n\\nMoreover, while the refinement schedule $\\\\gamma$ is one of the key novelties of IR, the greedy optimization algorithm is an established optimization approach; thus, we believe that the details regarding this algorithm are secondary to the main contributions which we include in the main text. \\n\\n**W2:** We thank the reviewer for their suggestion. While we do include a comprehensive list of baselines in our full set of results in Tables 6, 7, and 8, we will make sure to include the top-performing of these baselines in our Figures as well for the updated version of our paper. \\n\\n**W3:** We thank the reviewer for pointing out a similar approach adopted in Gupta et al. (2022) regarding our reformulated autoregressive problem. 
While it is true that conditioning on the prediction stride $j$ has been previously explored, the focus in the referenced study lies in evaluating models conditioned on multiple parameters, including $j$, across datasets. In contrast, our contribution emphasizes how this conditioning enables flexible sampling that supports parallelization and enables the prediction of intermediate states without compromising accuracy compared to next-step surrogates. Therefore, we will discuss the work of Gupta et al. (2022) in our revised manuscript and delineate the key differences between our objective and theirs. \\n\\n**Q2:** We appreciate the reviewer\\u2019s perspective on reinterpreting the TSM algorithm as a modified noise schedule with equivalent noise steps. For diffusion models in general, optimizing the noise schedule is a critical task that shall be tuned on a case-by-case basis whether through fixed schedules (e.g., linear, cosine, or sigmoid) [1] or learned ones [2]. While the proposed equivalence between truncation and a modified noise schedule might hold conceptually, in our experiments, we reported that noise schedules with large steps often produce noisy outputs, as shown in Tables 6 and 7 for the \\\"DDPM T20\\\" model. Therefore, adopting a custom schedule with the same noise steps as in TSMs without using Tweedie's formula for the last step at low SNR instead of an iterative refinement step is highly anticipated to result in suboptimal, noisy results. We believe that the ability to take a significant step from a low SNR state directly to a clean sample is facilitated by Tweedie's formula as it estimates the posterior mean rather than following the solution trajectory of the probability flow ODE/SDE. 
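To make the posterior-mean argument concrete, the Tweedie-style jump from a noisy state directly to a clean estimate under the standard DDPM parameterization can be sketched as follows (an illustrative sketch in our notation; `eps_hat` stands in for the trained network's noise prediction):

```python
import numpy as np

def tweedie_x0(x_t, eps_hat, alpha_bar_t):
    """Posterior-mean estimate of x_0 given x_t.

    Assumes the standard DDPM forward process
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    so plugging in the network's noise estimate jumps from a (possibly
    low-SNR) state straight to a clean sample, instead of integrating
    the probability-flow ODE/SDE step by step.
    """
    ab = alpha_bar_t
    return (x_t - np.sqrt(1.0 - ab) * eps_hat) / np.sqrt(ab)
```

With a perfect noise estimate this inverts the forward process exactly; in practice, the network's estimate makes it an approximation whose quality degrades as the SNR of $x_t$ decreases.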
While this approach aligns conceptually with methods that use Tweedie's formula for final denoising (e.g., [3]), the novelty in our work lies in applying Tweedie's formula to significantly noisier states, enabling efficient sampling while maintaining accuracy.\\n\\n\\n[1] Ting Chen. On the Importance of Noise Scheduling for Diffusion Models. 2023.\\n\\n[2] Kingma et al. Variational Diffusion Models. NeurIPS 2021.\\n\\n[3] Score-Based Generative Modeling through Stochastic Differential Equations. In: ICLR (2021).\"}", "{\"title\": \"Reviewers' Response\", \"comment\": \"Dear Reviewers,\\n\\nAs the author-reviewer discussion period is approaching its end, I would strongly encourage you to read the authors' responses and acknowledge them, while also checking if your questions/concerns have been appropriately addressed.\\n\\nThis is a crucial step, as it ensures that both reviewers and authors are on the same page, and it also helps us to put your recommendation in perspective.\\n\\nThank you again for your time and expertise.\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your clarifications! I\\u2019m looking forward to reviewing the revised manuscript. Below are a few additional comments in response to your replies:\\n\\n**W1 & Q1**: Understood. If time permits, exploring this in the final version would be very useful.\\n\\n**W2 & Q2**: Agreed\\u2014it makes sense, and I think you made the right decision to adjust the paper\\u2019s title to emphasise the focus on physics-based simulations more clearly.\\n\\n**W3**: This is very useful, especially given the heuristic nature of the approach. Including a clear \\\"recipe\\\" for tuning the hyperparameters would greatly enhance the usability of the method.\\n\\n**W4 & Q3**: Great, I think such a figure would be beneficial to showcase that the predictions remain physical. 
It would be even better if the figure plotted results over trajectory length or time steps, similar to Fig. 10 in Kohl et al.\\n\\n**W5:** This is interesting. While additional tuning might make the EDM results more consistent even for the more challenging datasets, I understand that this is beyond the scope of the current work. Still, the fact that your method achieves better performance with less complex fine-tuning is a valuable part of the contribution.\\n\\n**W6:** Regarding Fturb, are the test results based solely on two test trajectories (i.e., two initial conditions)? If so, wouldn\\u2019t it make sense to use a test dataset with more initial conditions to better assess the generalisation capabilities?\"}", "{\"title\": \"Part (2/4)\", \"comment\": \"**W4 & Q3.** We thank the reviewer for their suggestion regarding the stability for longer rollouts of our approaches. In addition to time-averaged metrics, in Fig. 7 (Appendix C), we show how each of the top-performing models from Table 2 correlates with the ground truth solution trajectory over time by calculating the Pearson correlation coefficient for the absolute velocity $\\\\rho (|u|)$ at each timestep $r$. Also, in Fig. 2 (b), we report $\\\\rho$ for the time evolution of the domain-wide kinetic energy, which describes the total kinetic energy of the system.\\nAdditionally, as requested, we include more analysis with regard to the temporal stability of our methods compared to traditional ones. We estimate the temporal stability for both $Tra_{long}$ (as defined in [2]) and $Fturb_{long}$ (combines both $ext$ and $int$ regions but with $R = 120$ instead of $R = 30$; more details will be included in an additional appendix detailing all datasets) by calculating the rate of change of each flow field $x$ using $||(x_r - x_{r-1})/\\\\Delta \\\\tau||$ [2] and comparing against the ground truth simulation (GT).
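As an illustration, this stability parameter can be computed per timestep as follows (a sketch; `traj` is assumed to be one rollout of flow fields with a leading time axis):

```python
import numpy as np

def stability_series(traj, dtau):
    """Rate of change ||(x_r - x_{r-1}) / dtau|| along a rollout.

    traj: array of shape (R, ...) with the flow field at each of R timesteps.
    Returns R-1 norms; a series that stays bounded indicates the rollout
    remains temporally stable even after it de-correlates from the reference.
    """
    diffs = np.diff(traj, axis=0) / dtau
    return np.linalg.norm(diffs.reshape(diffs.shape[0], -1), axis=1)
```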
In summary, for $Tra_{long}$, our stability parameter remains bounded within the range $[0.015, 0.02]$ for both the ground truth (GT) and all models. While DDIM, DDPM, and IR slightly exceed the lower bound, they maintain overall stability. For $Fturb_{long}$, the stability parameter for the GT and all models is constrained within $[0.1, 0.15]$, with the exception of a minor deviation observed in DDPM. *The corresponding Figure will be included in our revised paper*. Contrary to $\\\\rho (|u|)$, this parameter is useful for long rollouts as it helps identify whether a solution trajectory remains physical even after no longer being correlated with the GT. These additional results show that our proposed methods exhibit temporal stability for simulation trajectories significantly longer than the trajectories used for training. \\n\\n[2] Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation. In: ICML (2024).\\n\\n**W5 - Lack of a comparison to EDM.** As requested, we trained EDMs for three out of our four datasets, namely $Tra$, $Fturb$, and $Air_{One}$, and compare their performance against the top-performing models reported in Tables 1 and 2 (left). We use the same training hyperparameters (see Table 3) and network architecture as in Table 4. Our implementation of EDMs is based on the work by Karras et al. (2022). We implement their Algorithm 1 (deterministic sampler) and Algorithm 2 (stochastic sampler) using 1st-order Euler and 2nd-order Heun's methods with design choices from the last column of Table 1 (Karras et al., 2022). We also consider preconditioning and a weighted loss function using the parameters recommended by the authors. For each case, we ran the model using different combinations of deterministic/stochastic sampler and Euler's/Heun's method. For the stochastic sampler, the parameters $\\\\{S_{churn},S_{tmin}, S_{tmax}, S_{noise} \\\\}$ were non-comprehensively tuned to attain the best possible results.
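For reference, a minimal sketch of the deterministic 2nd-order Heun sampler (Algorithm 1 of Karras et al., 2022) used in this comparison; the denoiser `D` is a stand-in for the trained, preconditioned network:

```python
import numpy as np

def edm_sigma_schedule(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Karras noise schedule t_0 > ... > t_{n-1}, with an appended t_n = 0."""
    ramp = np.linspace(0, 1, n)
    sigmas = (sigma_max ** (1 / rho)
              + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return np.append(sigmas, 0.0)

def heun_sample(D, x, sigmas):
    """Deterministic Heun integration of the probability-flow ODE."""
    for t, t_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - D(x, t)) / t                  # Euler slope at noise level t
        x_euler = x + (t_next - t) * d
        if t_next > 0:                         # 2nd-order correction, skipped at the last step
            d_next = (x_euler - D(x_euler, t_next)) / t_next
            x = x + (t_next - t) * 0.5 * (d + d_next)
        else:
            x = x_euler
    return x
```

With an idealized denoiser that always returns the clean field, this integrates exactly to that field, which is a useful sanity check before plugging in a trained network.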
TA-MSE vs NFEs ($\\\\in [1,200]$) curves for these various samplers will be included in our revised manuscript.\\nIn the tables below, we provide the updated Tables 1 and 2 (left) with the additional best EDM model. We summarize these findings:\\n - $Tra$ - EDM outperforms all probabilistic and deterministic models with only 4 NFEs. TSM achieves nearly comparable accuracy while remaining the sole model capable of single-step inference. Given the negligible accuracy difference from the best-performing model, we hypothesize that a more accurately pre-trained DDPM could enable IR to achieve more competitive results with low NFEs.\\n - $Fturb$ - EDM outperforms DDPM and DDIM in both accuracy and NFEs but achieves comparable performance to TSM, despite being $10\\\\times$ slower. IR, however, remains the most accurate model for this dataset. \\n - $Air_{One}$ - EDM performs poorly in comparison to all other models. We believe the reason for this suboptimal performance is that the hyperparameters (including loss function weighting, scaling, and diffusion-related parameters) might not be optimal for this case and thus would require comprehensive tuning for improved results. Indeed, we believe that these author-recommended settings for EDMs are not expected to perform well in all applications (e.g., see [3]).\\n - In conclusion, EDMs are a great alternative to standard DDPMs, yet they don't provide significant improvements across different fluid dynamics problems compared to other baselines and our proposed approaches. We believe an interesting avenue for future research would be the combination of our proposed methods with EDMs to potentially enhance their performance and most importantly facilitate single-step inference.\\n \\nWe report these new findings in our updated manuscript.\\n\\n[3] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis.
In: ICML (2024).\"}", "{\"title\": \"Part (3/4)\", \"comment\": \"| **Model** | **NFEs** | **Tra ext ($10^{-3}$)** | **Tra int ($10^{-3}$)** | **Model** | **NFEs** | **Fturb ext ($10^{-2}$)** | **Fturb int ($10^{-2}$)** |\\n|----------------------------------------|---------|-------------------------|-------------------------|----------------------------------------|---------|---------------------------|---------------------------|\\n| ACDM T20 (Kohl et al., 2024) | 20 | 2.3 \\u00b1 1.4 | 2.7 \\u00b1 2.1 | | | | |\\n| $UNet_{ut}$ (Kohl et al., 2024) | 1 | 1.6 \\u00b1 0.7 | 1.5 \\u00b1 1.5 | Baseline | 1 | 3.95 \\u00b1 3.84 | 4.82 \\u00b1 5.23 |\\n| DDPM T100 | 100 | 3.0 \\u00b1 2.7 | 4.1 \\u00b1 3.7 | DDPM T80 | 80 | 6.16 \\u00b1 6.54 | 5.27 \\u00b1 5.55 |\\n| **EDM** | 4 | **1.3 $\\\\pm$ 1.3** | **1.1 $\\\\pm$ 1.0** | **EDM** | 10 | 4.75 \\u00b1 4.53 | 4.78 \\u00b1 4.26 |\\n| DDIM T20 | 10 | 3.2 \\u00b1 2.7 | 4.2 \\u00b1 3.9 | DDIM T80 | 40 | 4.31 \\u00b1 4.62 | 6.97 \\u00b1 6.84 |\\n| IR T100 - $\\\\mathcal{N}$ $\\\\gamma_1$ (Ours) | 5 | 1.6 \\u00b1 1.3 | 2.0 \\u00b1 1.7 | IR T80 - $\\\\mathcal{N}$ (Ours) | 10 | **2.93 \\u00b1 3.34** | **1.70 \\u00b1 1.63** |\\n| TSM T100 $s=1$ (Ours) | 1 | **1.2 \\u00b1 1.1** | 1.5 \\u00b1 1.5 | TSM T100 $s=1$ (Ours) | 1 | 5.00 \\u00b1 5.52 | 4.39 \\u00b1 4.76 |\\n\\nFor the $Air_{One}$ case:\\n\\n| **Model** | **NFEs** | **$(MSE_{{\\\\mu},y})_{a}$ ($10^{-4}$)** | **$(MSE_{{\\\\sigma},y})_{a}$ ($10^{-4}$)** |\\n|----------------------------------------|---------|------------------------------------------------------|------------------------------------------------------|\\n| DDPM T200C$^\\\\ast$ (Liu and Thuerey, 2024) | 200 | 3.79 \\u00b1 0.27 | 8.40 \\u00b1 0.69 |\\n| DDPM T200 | 200 | **2.88 \\u00b1 0.26** | 7.05 \\u00b1 0.22 |\\n| **EDM** | 20 | 8.13 \\u00b1 1.28 | 10.1 \\u00b1 1.67 |\\n| DDIM T200 | 100 | 3.68 \\u00b1 0.44 | 7.24 \\u00b1 0.25 |\\n| IR T100 - $s=0.6$ $\\\\gamma_\\\\text{5}$ (Ours) | 41 | **2.87 \\u00b1 
0.32** | 6.76 \\u00b1 0.20 |\\n| TSM T100 $s=0.9$ (Ours) | 10 | 3.30 \\u00b1 0.39 | **5.89 \\u00b1 0.33** |\\n\\n\\n**Unclear experimental setup.** We agree with the reviewer that a more concise presentation of the experimental setup would improve clarity. We will dedicate an additional appendix to summarize the primary parameters for the different datasets for the sake of completeness.\", \"regarding_the_examples_mentioned\": [\"**Tra**: Yes, we use the exact same training trajectories as in the benchmark paper.\", \"**Tra**: For each $Ma$, there is a single trajectory with $R = 500$. However, each is split into 2 consecutive trajectories with $R = 60$, starting from $r = 250$ for the test regions *ext* and *int*.\", \"**Fturb**: For each $Re$, there are 240 trajectories (each starting from a different initial state). Each trajectory has 51 states (i.e., $R = 51$). However, during testing, we consider two trajectories of 30 states each for testing as we observed during early experiments that the predictions quickly de-correlate from the ground truth for longer rollouts.\", \"**Fturb**: True. $Re$ is a scalar condition provided as an input to the network as a 2D constant field, similar to how we handle $Ma$ for the *Tra* case.\", \"**Minor 1 - Comparison to DDIM.** We indeed agree that the dynamics of stochastic sampling is complicated in practice and that our statement might not reflect our intended meaning. Our statement is intended to reflect that IR should supersede DDIM based on the empirical observations from our experiments, which are focused on select fluid dynamics problems. 
Since the benefits of stochastic sampling are case-dependent, we will update our statement to reflect that the trade-off between stochastic and deterministic approaches may vary across domains and thus doesn't guarantee that IR will always outperform DDIM.\"]}", "{\"title\": \"Part (4/4)\", \"comment\": \"**Minor 2 & Q4 - Lack of discussion on $j$.** The optimum $j$ value is an interesting finding in our results. While we don't see a clear pattern between the different models and experiments, we observe that the most accurate models for $Tra$ have $j_{optimal}\\\\leq4$ (with few exceptions) and those for $Fturb$ have $j_{optimal} \\\\in \\\\{2, 6\\\\}$. This variability suggests that the optimal $j$ depends not only on the specific surrogate model but also on factors such as timestep size $\\\\delta \\\\tau$, trajectory length $R$, and the inherent complexity of the dataset.\\nOur analysis remains focused on identifying optimal $j$ values for specific models and experiments. Importantly, these findings demonstrate that next-step sampling (i.e., $j=1$) is not always the most accurate choice, and larger strides can achieve competitive accuracy with enhanced parallelization.
While we have not conducted a fully comprehensive study of $j_{optimal}$, we hope this work motivates further research into stride conditioning for faster and more accurate inference in spatio-temporal CFD problems.\\n\\n**Minor 3 - typos.** We thank the reviewer for pointing out these typos. We will carefully proofread the manuscript to address the noted errors.\\n\\n**Q5:** We thank the reviewer for pointing out this typo in Eq. 5. Indeed, the first argument for $p^T$ should always be $x_T$ while the second argument depends on the output of the previous DDPM iterative sampling function $p^T$.\\n\\n**Q6:** There are no differences between the settings in Table 5 and Tables 6 and 7. In Table 5, the TA-MSE values are averaged over both $ext$ and $int$ regions (as outlined in the caption of Table 5), while the results in Table 6 and 7, the TA-MSE values have been reported separately for each region.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your earlier comments and insights. \\n\\nWe value your feedback greatly and want to ensure that our responses have addressed your concerns. To this end, we invite you to review the updated version of our paper, which includes new baselines, additional analysis, and improvements based on other reviewers' suggestions. If you have any further questions or additional feedback, we would be happy to address them.\\n\\nWe sincerely appreciate your time and valuable feedback in helping us refine our work.\"}", "{\"comment\": \"Thank you for the rebuttal. Please find my response below:\\n\\n**W1 & Q1:** Thank you for the clarification on the differences between IR and PDE-refiner and updating this in the paper. \\n\\nWith regard to the greedy optimization, are you aware of similar approaches being used to optimize a sampling schedule for (pre-trained) diffusion models? If this is the case, it would help to provide a citation. 
If this is not the case, in my opinion the paper would benefit from giving this more prominence: although greedy optimization in itself is not new (obviously), I think the way it is applied in this context is non-trivial and could inspire future work. Of course, it is up to the authors to decide.\\n\\n**W2:** Thank you. I saw that you included those results in the appendix. Given that your contribution aims to \\\"reduce the gap between DDPMs and deterministic single step approaches\\\", I think it would be beneficial to put some attention on the top deterministic baseline(s) in part of the figures/tables in the main text. If I am not mistaken, at this point there is only the UNet in table 1 in the updated manuscript, but I trust that the authors will show deterministic baseline results if/where space permits.\\n\\n**W3:** Thank you for the explanation and adding this to the paper.\\n\\n**Q2:** Thank you for the explanation.\\n\\nSome reviews expressed a concern that the contributions are specialized towards fluid dynamics. Although I agree, in my view, this specialization is not a problem, as the ML community focusing on this area is growing. This is evidenced by many papers accepted in major ML conferences that have had a similar focus. Further, the scope of the contribution of this work is in line with what is expected from a paper in this area. \\n\\nMore importantly, this paper provides interesting and novel insights into sampling from diffusion models for fluid dynamics simulation. These insights come at the right time, as diffusion models have been gaining more traction for neural-network driven physics simulations over the last year or so. For these reasons, and with the rebuttal having alleviated my key concerns, I raised my score.\"}", "{\"comment\": \"**Number of test trajectories** Ok, I see. If the authors did not notice much difference between the results on 2 vs. 
4 trajectories per $Re$, then it might be that in the case of Fturb there aren't any initial conditions that are \\\"harder\\\" to solve than others.\\n\\nThe reason why I am asking is because in other datasets (and I am going to return to the KS example here), the literature has reported that, depending on the initial conditions, some trajectories might be easier or harder to roll out from (see **Uncertainty estimation** in PDE-Refiner). Hence, in there it is crucial to make sure that the testing is performed on varied initial conditions.\\n\\nAs an extension to what you did, you could explore this with just one dataset (e.g. Fturb), and explore the uncertainties provided by the diffusion model. \\n\\n**Updated manuscript** Thank you for providing the updated manuscript, there are a couple of typos which I encourage the authors to resolve (e.g. L343 dataset comprises includes $u_x$, L704 informatioon, etc.). But otherwise, it looks good. I am looking forward to the experiments you will add if the paper gets accepted.\\n\\nThe majority of my concerns have been resolved and I think the paper is a good contribution to the community. While I agree with reviewer TSeh that the contributions are tailored to the fluid dynamics community, I also note that lately there has been an increasing interest in the machine learning community to tackle fluid dynamics modelling. As such, I will increase my score to 6, because I think the paper could be useful to ML researchers looking to speed up sampling in diffusion models aimed at physics-based simulation.\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"Thank you for the clarifications and the detailed answers.\\nAs a general comment, I find the answers convincing about the quality of the paper. (I thank the authors for taking into account the recommendations for the title of the paper. 
This title reflects the content of the paper.)\nHowever, I still think that the paper's contributions are limited to specific flow field simulation improvements and I agree with Reviewer 6cX6 about the lack of theoretical guarantees. Providing theoretical convergence guarantees on both algorithms would have made the paper more impactful. For the moment, this paper could be a very nice paper in any fluid dynamics journal but the machine learning contributions are slightly low for a machine learning conference. In my opinion, the authors should investigate the convergence properties (we might be able to derive some bounds from Tweedie's formula) and the potential applications to some other datasets. It is not far from a good paper for ICLR and the questions raised by this paper are very interesting.\"}", "{\"comment\": \"We greatly appreciate the reviewer\u2019s thoughtful feedback and their recognition of our contributions.\n\n**Greedy algorithm:** Although we think that the choice of a simple greedy algorithm is highly effective in our experiments, it may not (always) be optimal. An alternative approach for optimizing the sampling schedule of a pre-trained DM is, for example, [Bespoke solvers](https://arxiv.org/abs/2310.19075), which may yield better results compared to our approach. Nonetheless, we agree that highlighting the effectiveness of a simple algorithm in optimizing sampling schedules is crucial in our work. We will, therefore, consider emphasizing that in our final paper.\n\n**Top deterministic baselines:** Thank you for your suggestion. We agree that it is important to highlight the performance of our models in comparison to deterministic baselines in the main text. Currently, we include UNets in our tables as they represent the top-performing baselines among those considered, making them a strong and adequate reference for comparison.
As suggested, we will explore the possibility of incorporating additional baselines into our tables and/or figures, within the permitted space, to provide a clearer comparison of the various models.\"}", "{\"title\": \"Reply (1/2)\", \"comment\": \"We thank the reviewer for their thoughtful comments and their interest in our approaches to reduce computational costs. We address their concerns below.\\n\\n**Use of Tweedie's formula and similarity to [Delbracio et Milanfar, 2024].** We thank the reviewer for highlighting the connection between our proposed approaches, leveraging Tweedie's formula, with [Delbracio et Milanfar, 2024]. As mentioned, the models are fundamentally different, yet the aforementioned work can still be linked to DDPMs when the low-quality, degraded state is pure Gaussian noise. Their main reason to add scaled white noise in every step of their inference algorithm is to convert a deterministic algorithm to a stochastic one that is capable of exploring multiple possible explanations for a degraded sample and thus lead to better perceptual quality, though their approach was shown to not be beneficial in all cases. Since both our approaches are inherently stochastic (i.e., fresh white noise is injected in every step), the potential gains from injecting further white noise to the predicted posterior mean might have little effect. Alternatively, we can enforce data consistency after applying Tweedie's formula to improve the overall accuracy of the methods similar to resampling methods (e.g., [1] and [2]). However, this approach comes with its own limitations as we described in the paragraph starting at line 284.\\n\\n[1] Solving inverse problems with latent diffusion models via hard data consistency. In ICLR (2024).\\n\\n[2] Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing. 
In: CoRR (2024).\\n\\n**Superior performance of TSM with $s=1$.** We would like to clarify that single-step TSMs (i.e., using $s=1$) don't always outperform ancestral sampling. $s$ is a hyperparameter that is to be tuned on a case-by-case basis and it is not always guaranteed that extreme $s$ values will always lead to the best results as they can significantly impact the stochasticity of the method. Since complex datasets generally benefit from stochastic samplers [3], we see that $s = 1$ is only possible on the $Tra$ (see Table 6) case while the optimum results for the cases $Fturb$ and $Air$ are obtained using $s = 0.75$ (see Table 7) and $s = 0.6$ (see Table 8), respectively. This is due to the increasing level of difficulty in these test cases, requiring more steps of the reverse Markov chain. Therefore, the trade-off between accuracy and stochasticity would limit the practicality of perpetually using $s = 1$. The observed results are specific to the nature of fluid dynamics simulations and thus, a generalization is only possible when considering the features of the data distribution (as we explain in the paragraph beginning at line 406).\\n\\n[3] Karras et al. \\u201cElucidating the Design Space of Diffusion-Based Generative Models\\u201d. In: NeurIPS (2022).\\n\\n**TSM performance in image reconstruction.** We appreciate the reviewer\\u2019s interest in benchmarking our approaches against traditional DDPM sampling on image reconstruction. However, our focus on fluid dynamics problems is intentional due to the unique properties of the data. Physics-based simulations involve deterministic and continuous systems governed by PDEs, which differ significantly from the more complex, often multimodal distributions encountered in generative modeling tasks. Our methods are tailored to exploit these characteristics present in our datasets to support fast and accurate sampling of DMs. 
Hence, comparing TSMs/IR against ancestral sampling for image reconstruction is beyond the scope of our research.\\n\\n**Models performance on noisy dataset.** Our primary objective from this study is to evaluate the effectiveness of our proposed methods in capturing the underlying dynamics of fluid systems without introducing additional complexities. We believe that handling noisy datasets is an independent, challenging task, potentially requiring methodological modifications to the training or the denoising algorithms. For example, existing studies are dedicated to exploring analogous challenges, including noisy [4] or sparse [5] observations. One simple approach is to include the conditional information (i.e., the previous noisy state) in the iterative refinement process similar to [6], but the efficacy of this method in reducing the impact of added noise and its ability to recover a noise-free observation remains uncertain. Thus, we believe that out-of-the-box diffusion models are not expected to excel on noisy datasets without careful algorithmic modifications, which is an interesting topic for future work.\\n\\n[4] Risk-Sensitive Diffusion: Robustly Optimizing Diffusion Models with Noisy Samples. 2024.\\n\\n[5] DiffusionPDE: Generative PDE-Solving Under Partial Observation. 2024.\\n\\n[6] Kohl et al. Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation. In: ICML (2024).\"}", "{\"summary\": \"The paper addresses the high computational requirements of existing diffusion-based techniques for modelling dynamical systems. They propose two sampling techniques that lead to good sample quality with only a few NFEs. The first one requires modifications to the training process and is performed by truncating the diffusion process close to the clean data. The second one is compatible with pre-trained DDPMs and proposes an iterative refinement based on Tweedie\\u2019s formula. 
The paper provides extensive experimental evidence on three datasets: incompressible and compressible turbulent flow (2D) and airfoil flow simulation (3D).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Relevant topic.** Lately, there has been increasing interest in modelling dynamical systems with diffusion models due to their probabilistic nature. However, many works (Kohl et al. [1], Shysheya et al. [2], Lippe et al. [3]) acknowledge that diffusion models tend to be computationally costly, and techniques that reduce this cost would be greatly beneficial. This is exactly the problem this paper aims to address.\\n2. **Clear distinction from other works.** The paper clearly delineates its contributions from the already existing work, and how the proposed sampling techniques differ from other approaches.\\n3. **Good experimental evidence.** The paper provides good empirical evidence, with experiments on three diverse datasets, and using a wide range of metrics.\\n4. **Well-structured, clear writing.** Overall, I found the structure and writing clear.\\n\\n[1] Kohl, G., Chen, L., & Thuerey, N. (2023). Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation.\\n\\n[2] Shysheya, A., Diaconu, C., Bergamin, F., Perdikaris, P., Hern'andez-Lobato, J.M., Turner, R.E., & Mathieu, E. (2024). On conditional diffusion models for PDE simulations.\\n\\n[3] Lippe, P., Veeling, B.S., Perdikaris, P., Turner, R.E., & Brandstetter, J. (2023). PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers. ArXiv, abs/2308.05732.\", \"weaknesses\": \"1. **Unclear how the method generalises to other settings.** I acknowledge that the aim of the paper is to study the efficacy of the proposed sampling methods in the context of dynamical systems. 
As the authors mention, the fact that they lead to good results is probably due to:\\n- The characteristics of the data distribution\\u2014states with fairly coarse resolutions, where the high frequency information has been lost during downsampling.\\n- The task considered\\u2014predicting next step (forecasting) distribution, which is predominantly unimodal.\\n \\n I\\u2019d be curious to know whether the sampling methods remain applicable \\n - for other tasks that might require modelling multimodal distributions: e.g. some sort of inverse problem (reconstruct field based on sparse, partial observations, compatible with multiple solutions), which is also a task of interest in dynamical system modelling, \\n - or for other datasets: potentially still PDEs but not as coarsened and with more complicated nonlinear behaviour, or images. \\n\\n One could argue that Air_multi requires sampling from a more complicated distribution, and there the improvements are not as pronounced as in other settings. Perhaps you could include a brief discussion on this, or a preliminary experiment on a more complex PDE dataset, such as Kuramoto-Sivashinsky (see **Q1**)?\\n2. **Counter-intuitive comparison to DDPM.** Maybe this is not necessarily a weakness, but I find it counter-intuitive that these sampling methods outperform DDPM, given that DDPM (or at least the continuous-time frame formulation) has stronger theoretical foundations (Similarly to how DDIM is posed as a framework where you can trade off computation for sample quality, not gain in both). I would have expected the same accuracy with significantly fewer NFEs to be possible, but not both. Is it possible that the DDPM model could be further tuned to achieve similar performance and maybe the DDPM baselines are not optimal? Maybe this is where a comparison with EDM would have been beneficial, as it provides a more principled approach of setting up diffusion models.\\n3. 
**Lack of theoretical guarantees.** While the methods seem effective in practice, the paper does not provide any theoretical guarantees. I realise this might be hard to derive, but this makes the applicability of the methods more ad-hoc. This also reflects in the empirical investigation, where there is no clear recipe for what works best and how one should choose the hyperparameters optimally.\\nAlso when making statements such as \\u201cHowever, we assert that\\nour methods will consistently improve over ancestral sampling, as demonstrated in our experiments.\\u201d - This was only shown in three experiments, and the paper does not contain guarantees that this would always hold, so I would avoid over-generalising.\\n4. **Lack of analysis of the stability of longer rollouts.** The paper provides several metrics to analyse the performance of the sampling methods, but none provides intuition/results about how the metrics evolve in time for the transient datasets (Tra and Fturb) (e.g. per-time-step MSE or per-time-step correlation). In particular, I\\u2019d be interested to compare the performance of the benchmarks to your methods on rollouts longer than what the model has been trained on. (This could, for instance, be tested on Tra_long from Kohl et al. [1]).\\n \\n I think that one potential weakness of these sampling schemes is that they lose more of the high frequency information than the traditional sampling schemes. Maybe this doesn\\u2019t affect the short rollouts significantly but it might negatively impact the accuracy of longer term rollouts (see Lippe et al. [3]). It would also be good to check that if extended significantly beyond the training range, the proposed sampling methods still generate physically plausible states, and outperform the baselines.\\n5. **Lack of a comparison to EDM.** While I agree that a comparison to, for example, distillation techniques is outside the scope of the paper, I think EDM is relevant as a baseline. 
The fact that it is \\u201cdesigned to handle more complex stochastic data with multimodal distribution\\u201d does not mean it is not relevant for cases that do not exhibit much stochasticity. And it shouldn\\u2019t incur a different computational cost at training time. I think a comparison to EDM would be very useful to the community to figure out what the fastest and most accurate way to set up diffusion models for dynamical systems is. If the methods proposed here outperform EDM, this makes the paper stronger. If they don\\u2019t, I think this would also be a valuable result, potentially implying that EDM is a robust technique (regardless of data distribution) that should be used as a \\u201cfirst thing to try\\u201d as opposed to spending time and resources on hyperparameter tuning of more ad-hoc techniques.\\n Maybe the authors could include a comparison to EDM on one or two experiments?\\n6. **Unclear experimental setup.** I found it hard to figure out some experimental details in certain places. I am aware these details exist in other papers (Kohl et al. [1]), but for ease of interpreting the results, it would help to include some more details, potentially as a brief table in the appendix which summarises the most important dataset characteristics. For example: \\n - Tra - What is the Mach number range of the training trajectories? Is it also Ma $\\\\in [0.53, 0.63] \\\\cup [0.69, 0.90]$ as in Kohl et al.?\\n - Tra - How many trajectories are there per Mach number?\\n - Fturb - I am slightly confused about the number of states within each trajectory for this dataset. You mention that each simulation contains 51 temporal states, but do you just consider 30 out of these for the test results? And what do you mean by \\u201cAR sampling is employed for $R = 30$ timesteps \\u2026 for two sequences per $Re$?\\u201d\\n - Fturb - you are feeding in the $Re$ number as conditioning information to the model as in Kohl et al., right?\\n\\n**Minor**\\n\\n7. 
**Comparison to DDIM (L306)** - You say that \\u201cIR should supersede the deterministic DDIM sampling regarding accuracy and NFEs\\u201d due to its stochastic nature. While I agree that stochastic sampling is beneficial because it can correct previous errors in sampling, Karras et al. [4] mention that, in practice, the situation is more complex because approximating the extra Langevin term introduces error in itself (see Section 4 Stochastic sampling). Thus, I would not say there is any guarantee that IR would supersede DDIM in all scenarios.\\n8. **Lack of discussion on $j$.** I find it interesting that the optimum $j$ value varied so much between methods, as shown in Tables 6 and 7, but you do not include any discussion about this. Do you have any insight about why this might be the case?\\n9. **Small typos**, such as L291 \\u201cto be evaluated\\u201d, L776 \\u201cpresnted\\u201d, L772 \\u201cwhere\\u201d missing at the beginning of the line, L819 \\u201callow\\u201d rather than \\u201callows\\u201d, L840 \\u201care not needed\\u201d etc.\\n\\nOverall, I think the paper is clearly written and structured, and generally presents convincing empirical evidence. However, I think its quality could be significantly improved by including experiments on longer rollouts, a comparison to EDM, and potentially clarifying the regimes in which the techniques are effective (with the inclusion of negative results if necessary).\\n\\n[4] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the Design Space of Diffusion-Based Generative Models. ArXiv, abs/2206.00364.\", \"questions\": \"1. **Regarding W1** - I\\u2019d be curious to see how the methods perform on dynamical systems with more complicated nonlinear dynamics, for example the Kuramoto-Sivashinsky equation used in PDE-Refiner (Lippe et al. [3]). 
That is a fourth-order non-linear equation where correctly capturing the high frequencies seems to be more important than in other settings, so it would be interesting to see how you compare there with benchmarks such as PDE-Refiner [3] (which can also be interpreted as a refinement of an MSE-trained one-step prediction).\\n2. **Regarding W2** - Do you have any intuition why these methods outperform DDPM and whether the DDPM baseline could be improved?\\n3. **Regarding W4** - Could you provide some experiments that test the long rollout performance of these sampling methods vs. traditional ones? Including frequency spectra of generated states would also help.\\n4. **Regarding Minor 2** - It seems that in general the optimum $j$ for these methods is between [2, 4], but have you noticed any significant patterns? Were there differences between interpolation and extrapolation tasks?\\n5. **Eq (5).** I am not sure I understand this equation. I will omit the $\\\\theta$ in the $p_{\\\\theta}$ subscript because the equations do not render correctly. If $p^T(x_T, \\\\mathbf{x}_0, j) = \\\\mathbf{x}(j \\\\cdot \\\\delta t)$, then wouldn\\u2019t $\\\\mathbf{x}(2j \\\\cdot \\\\delta t) = p^T(x_T, \\\\mathbf{x}(j \\\\cdot \\\\delta t), j) = p^T(x_T, p^T(x_T, \\\\mathbf{x}_0, j), j)$ (i.e., we still start from white noise $x_T$, but we now condition on the output of the previous step)? In my mind $\\\\mathbf{x}(\\\\tau_f) = p^T(x_T, p^T(...p^T(x_T, \\\\mathbf{x}_0, j)...), j)$, but maybe I didn\\u2019t interpret the equation correctly.\\n6. When comparing the results in Table 5 vs. Tables 6 and 7 - Shouldn\\u2019t the metrics corresponding to, for example, Fturb TSM T80 s =0.75 ($j=2$) from Table 5 ($3.63 \\\\pm 1.95$) be the same as in TSM T80, s=0.75 with optimal $j=2$ ($3.43 \\\\pm 3.03$) Table 7? 
What\u2019s the difference between these settings?\n\n**----Update after rebuttal-----**\n\nThe majority of my concerns have been addressed and I believe the paper offers a useful empirical investigation into how to speed up sampling with diffusion models for physics-based simulations. As such, I increased my score to 6.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response\", \"comment\": \"We greatly appreciate the reviewers' positive comments on our methods' significant contribution to efficient sampling in fluid dynamics, as supported by extensive experiments, as well as the constructive feedback provided by all reviewers, which has been invaluable in refining our work.
We acknowledge the concerns and suggestions raised, and we have made several revisions to address these points, enhancing the clarity and breadth of our contributions.\", \"we_summarize_the_major_upcoming_updates_which_will_be_included_in_our_revised_manuscript_in_the_next_couple_of_days\": [\"We shall update the title to emphasize our focus on fluid dynamics instead of general physics-based simulations. We propose the title \\\"Improved Sampling of Diffusion Models in Fluid Dynamics Simulations by Leveraging Tweedie's Formula,\\\" as it more precisely conveys the focus of our study.\", \"We will include our newly evaluated EDMs for 3 out of 4 experiments, comparing their performance against standard DDPMs and our proposed approaches. In summary:\", \"EDMs excel on $Tra$, achieving top accuracy with 4 NFEs, while TSM follows with almost the same accuracy as the only probabilistic model enabling single-step inference with high accuracy. With a more accurately pre-trained DDPM, IR can potentially achieve more competitive results given the insignificant accuracy difference from the best-performing models.\", \"On $Fturb$, EDM is on par with TSM, albeit being $10\\times$ slower, whereas IR remains the most accurate.\", \"On $Air_{One}$, EDM clearly performs poorly, likely due to suboptimal hyperparameters.\", \"We conclude that, although EDMs may outperform standard DM baselines, they are less accurate and/or slower than our proposed approaches as demonstrated across a range of fluid dynamics problems.\", \"As requested, we will add the results for our temporal stability analysis study for the transient cases using very long rollout trajectories.\", \"We will add an appendix summarizing the main parameters for all our experiments, including benchmark ones, to ensure the clarity of our experimental setup.\", \"As requested by reviewer *fSso*, we will extend the discussion on the novelty of the IR method to highlight the key differences with PDE-refiner and why the latter
performs suboptimally compared to our methods.\", \"An additional appendix will be dedicated to describing a heuristic approach to efficiently optimize the hyperparameters associated with our approaches, supported by the findings from our experiments.\"]}", "{\"summary\": \"The paper introduces a truncation approach and an iterative refinement process in the sampling procedure of the Denoising Diffusion Probabilistic Model (DDPM) that enable reducing the number of function evaluations without decreasing the accuracy. The first method proposes to stop the sampling process at an earlier time point and to estimate the denoised sample using Tweedie's formula. The second method uses the forward diffusion for a given shorter noise schedule and the denoised sample is approximated using Tweedie's formula. The authors show the efficiency of the approach on simulations of airflow fields.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper reads very well and provides some good contributions to the development of diffusion models.\", \"Although the presented approach is based on some known results (Tweedie's formula, ancestral sampling), the proposed solution seems very efficient in practice and leads to lower computational costs. Both ideas exploit the denoising Tweedie's formula in forward and backward sampling.\", \"Including the truncation in the training is a smart trick that eases the training.\", \"The numerical experiments show that the presented approaches achieve better performance than traditional DDPM sampling and reduce the computational costs.\", \"The approach seems to be very efficient in physics-based simulation of flow fields. I think this is a very good contribution in this specific domain.\"], \"weaknesses\": [\"Although the approach seems efficient in the presented numerical experiments, there are a number of points that need to be clarified.\", \"Tweedie's formula is well known.
It seems that this has already been used in some previous works such as [Delbracio et Milanfar, 2024]. In their work, the authors provide some intermediate reconstructions through this formula. They show that by adding some stochastic steps, they can get better performance than the state of the art. I know the model is not the same but it would have been interesting to highlight the links with this work because they seem very closely related. Could the authors comment on that point?\", \"The results show that TSM outperforms traditional DDPM by using s=1. This means that on this specific problem, there is no need to sample intermediate diffusion steps. I wonder whether this aspect is problem-specific or whether this happens for a wider range of problems. Is this result expected?\", \"It would have been interesting to see how this method performs against traditional DDPM sampling on image reconstruction. Indeed, DDPMs usually perform well on such problems.\", \"The training data are deterministic sequences of flow field data. It would have been interesting to observe how the model performs on a noisy dataset.\", \"Overall, I like the idea of truncation and iterative refinement, but since there is no major theoretical contribution in this work, I would have liked to see more numerical results. The claim \\\"Truncation is all you need\\\" would have been justified if the authors had included numerical results on different applications. So far, the paper makes a very good contribution in this specific domain, and proposes an interesting approach to reduce computational costs.\", \"Delbracio et Milanfar, 2024, Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration. TMLR.\"], \"questions\": [\"Does s=1 mean that the model is trained as a single variational autoencoder? Or is this truncation only used for sampling?
(paragraph about the training is not clear to me)\", \"naive question for my understanding: At the beginning of Section 5.1, the authors say that they consider time series of flow fields from j = 1 to T. It is not clear how the time series are handled here. Could the authors clarify this point?\", \"How do the authors handle the high-fluctuation areas of the domain? It seems that some regions of the domain have a highly turbulent flow field (low-pressure vortex) and this would require a more flexible model in this specific area.\", \"Did the authors try to change the value of the initial input $x_{init}$?\", \"I look forward to reading the answers of the authors and I can change my score depending on their answers.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updated Manuscript\", \"comment\": [\"Dear reviewers,\", \"We have uploaded a revised manuscript with significant modifications to include the new results and your comments. These modifications are:\", \"Updated the title to be \\u201cImproved Sampling Of Diffusion Models In Fluid Dynamics With Tweedie's Formula\\u201d to clearly convey the focus of the paper.\", \"Highlighted text modifications in red to include discussions with the reviewers.\", \"Updated all respective tables and figures to include the results of EDM for the three experiments.\", \"Added plot (Fig. 7) for EDM sampling using various sampler configurations for all considered cases.\", \"Added temporal stability plot (Fig. 9) for the transient problems.\", \"Added Appendix A (experimental setup) and Appendix G (hyperparameter tuning).\", \"Upon acceptance, we will add the following for the camera-ready version:\", \"An additional experiment to evaluate our methods on the 1D Kuramoto-Sivashinsky problem as suggested by the reviewers.\", \"Training and evaluation of EDMs for the $Air_{Multi}$ case for comparison against our methods.\", \"Updating the code to include EDM training and sampling algorithms.\", \"We thank all reviewers for their valuable feedback, and we would like to invite them to check the revised manuscript.\"]}", "{\"comment\": \"We thank the reviewer for their comments and for highlighting the main advantages of our approaches in tackling efficient sampling with high accuracy and stability.\n\n**Mathematical interpretation of TSMs.** We appreciate the reviewer's comment regarding the theoretical roots of TSMs. Truncation at low $t$ (i.e., using $s<0.2$) even for pre-trained DDPMs mitigates computational overhead without accuracy loss because the retained steps sufficiently capture the large-scale and fine structures of the flow fields. At high $t$ (i.e., using $s\\gg0.2$), an accurate approximation for the posterior mean $\\mathbb{E}[\\hat{x}_0|x_t;\\theta]$ is only achievable by a model well-trained to approximate the score function (i.e., the gradient of the log-likelihood) for all $t \\in [s\\cdot T, T]$ and a target data distribution that permits a relatively simple reverse process, enabling high $s$ values. In fluid dynamics, the target distributions are often unimodal, which reduces the risk of bias introduced by Tweedie's formula, since the dominant mode of the distribution is captured with a relatively coarse discretization of the reverse-time SDE.
This property supports truncation as it simplifies the sampling process without compromising the ability to accurately represent the underlying physics.\\n\\n**Achieving a balance between speed and accuracy in IR/TSM.** Thank you for raising this important point. Indeed, achieving a balance between speed and accuracy in IR sampling and TSMs requires careful tuning of the associated hyperparameters. These are dataset- and task-dependent parameters that need to be optimized on a case-by-case basis to serve the specific dynamics of the problem.\\nRegarding the efficient optimization of these hyperparameters, we provide a heuristic approach supported by our results:\\n\\n- Deterministic test cases\\n - IR: we start with $x_{init} \\\\sim \\\\mathcal{N}(0,I)$ and run our greedy optimization algorithm with low N (with $N = |\\\\gamma|$) to obtain an efficient $\\\\gamma$ with low NFEs. As in Figure 4(b), $N=5$ often yield very good results for transient cases. $N$ can then be gradually increased to explore other schedules that could potentially enhance the accuracy with mild increase in NFEs.\\n - TSM: For higher speedup, the search for the optimum $s$ value typically begins within the range $[0.5, 1]$, especially if a large number of diffusion steps $T$ is chosen. We believe that Fig (3) provides several insights for the optimal combination of $s$ and $T$. We first test for both extremes of $s$ and then follow a standard line search approach to arrive at an optimum value for $s$, requiring additional 2 or 3 evaluations at most. We also give priority for models with low $T$ to minimize the NFEs required for inference.\\n- Stochastic test cases\\n - IR: $x_{init}$ is optimal when obtained through truncated sampling of a pre-trained DDPM with $s >= 0.5$. The output from this approach is usually highly accurate but exhibits clear noise; thus it usually takes $N < 5$ for $\\\\gamma$ to arrive at a clear output while improving or retaining the accuracy of the noisy input. 
\\n - TSM: We follow the same procedure as before; however, we refrain from extreme $s$ values. Therefore, our search is restricted to $s \\\\in [0.5, 0.9]$, or smaller lower bound for low $T$ values. \\n\\nWhile a rigorous closed-form mathematical framework remains challenging, our heuristic approach was found to provide adequate results for our experiments, minimizing typical exhaustive parameter search. We will expand on this in the revised paper to improve clarity.\\n\\n**Increasing breadth of experiments.** We appreciate the reviewer\\u2019s suggestion to broaden our experimental scope. We have included an additional comparison against EDMs for 3 of our experiments. In summary, across different fluid dynamics problems, EDMs demonstrate varying performance. On the $Tra$ dataset, EDM achieves the best accuracy with only 4 NFEs, surpassing all probabilistic and deterministic models. TSM achieves nearly comparable accuracy while remaining the sole model capable of single-step inference. On $Fturb$, EDM outperforms DDPM and DDIM in both accuracy and NFEs but achieves comparable performance to TSM, despite being $10\\\\times$ slower, whereas IR remains the most accurate model for this dataset. However, on $Air_{One}$, EDM underperforms compared to all other models, likely due to suboptimal hyperparameter settings, including loss weighting, scaling, and diffusion parameters, which may require extensive tuning. Please see our response to reviewer *6cX6* for additional details on the EDM comparisons.\\n\\nRegarding additional PDE solutions, while we believe that the current experiments cover diverse and intricate fluid dynamics phenomena, including both deterministic and stochastic scenarios, we acknowledge the value of testing on broader PDE settings. 
We\\u2019d be happy to include an additional experiment on the 1D Kuramoto-Sivashinsky problem in the camera-ready version of our paper.\"}", "{\"summary\": \"This paper introduces Truncated Sampling Models (TSMs) and Iterative Refinement (IR) to improve the efficiency and fidelity of Denoising Diffusion Probabilistic Models (DDPMs) for fluid simulations by enabling reduced steps sampling through truncation of the diffusion process. These methods significantly reduce inference time and improve stability over long rollout horizons for turbulent flow and airfoil simulations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Interesting topics \\u2013 Generative diffusion models as surrogate model for fluid simulations.\", \"Clear motivation \\u2013 Reduce computation costs in the generative process using a novel reverse sampling approach.\"], \"weaknesses\": [\"The paper lacks clarification of superiority of the proposed methods over other surrogate models, such as neural operators which are currently the most widely used ML-based surrogate models in fluid dynamics for speed and accuracy. Given that a primary contribution of this paper is reducing time costs, a more thorough comparison with advanced neural operators \\u2013 either highlighting the proposed method\\u2019s improved accuracy at similar time costs or its time efficiency at comparable accuracy \\u2013 would strengthen the argument. However, there are few comparisons with neural operators in the experiments in main text; although Unet is included, more advanced neural operators should be included. Additionally, the proposed method does not appear to clearly outperform Unet.\", \"The title may mislead some readers, as \\u201cphysics-based simulations\\u201d implies a broad range of applications, while the paper is mostly on fluid dynamics. To improve clarity, I recommend replacing \\u201cphysics-based simulations\\u201d to \\u201cFluid Dynamics Simulations\\u201d. 
Alternatively, the authors could clarify whether they intend to generalize their approach to other physics-based simulations beyond fluid dynamics or provide examples of how the method could be applicable to other physics domains.\"], \"questions\": \"- Could you theoretically or intuitively clarify the effectiveness of the pre-trained diffusion model in the IR sampling procedure at noise level $t = \\\\gamma $? The distribution $x_{init}$ at noise level $t = \\\\gamma $ appears to differ from the distribution $x_0$ at the same level $t = \\\\gamma $. Additionally, in Equation 6, is it ensured that the error between the final output and the $x_0$ remains sufficiently small, such that $E[\\u2016 x_0^N - x_0 \\u2016_2] < \\\\epsilon$?\\n\\n- Could you also clarify (experimental support would be helpful) line 288 (2), which states that IR sampling does not require data consistency to $\\\\hat{x}_0$? I mean, why does the proposed method not require data consistency? Enforcing data consistency to $\\\\hat{x}_0$ could be plugged in after line 7 in Algorithm 2 to improve accuracy in a single iteration, without compromising the number of iterations needed. Also, related to the above question, data consistency could help reduce the error towards zero, such as [1] and [2]. \\n\\n[1] A physics-informed diffusion model for high-fidelity flow field reconstruction, 2023.\\n\\n[2] Diffusionpde: Generative pde-solving under partial observation, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"See above.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces two techniques, truncation and iterative refinement, aimed at improving the efficiency of diffusion models by reducing the number of function evaluations while maintaining, and even increasing, the statistical accuracy of the samples. 
Both techniques leverage the well-known Tweedie's formula and are evaluated in steady and time-dependent CFD problems.\\n\\nReviewers highlight the paper's success in demonstrating that these techniques significantly reduce NFEs, leading to faster sampling and improved stability. The experimental validation on turbulent and airfoil flow datasets is considered appropriate and supports the claim that these methods maintain (or enhance) accuracy compared to standard DDPMs.\\n\\nHowever, as only examples for CFD are presented, it is unclear how well these techniques would be applicable to broader ML-related tasks. In addition, the claim regarding long rollouts appears to be based on relatively short rollouts, particularly when compared to recent works in this area (see [1, 2, 3]). \\n\\nAfter considering the strengths and weaknesses of the paper, I recommend acceptance.\", \"references\": \"[1] Jiang, Ruoxi, et al. \\\"Training neural operators to preserve invariant measures of chaotic attractors.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Schiff, Yair, et al. \\\"DySLIM: Dynamics Stable Learning by Invariant Measure for Chaotic Systems.\\\" Forty-first International Conference on Machine Learning (2024).\\n\\n[3] Li, Zongyi, et al. \\\"Learning chaotic dynamics in dissipative systems.\\\" Advances in Neural Information Processing Systems 35 (2022): 16768-16781.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers liked the simplicity of the ideas, but they complained about the title, which was then modified. They also raised issues about the limited scope of the paper (flow field simulation) and lack of theoretical guarantees.\"}
We acknowledge that our title, \\u201cTruncation is all you need,\\u201d could be misleading to the non-specialist audience as it implies broader generalizability across domains beyond fluid dynamics. As suggested, we will revise it to more accurately reflect our focus on physics-based simulation tasks.\\n\\n> Q: Does s=1 mean that the model is trained as a single variational autoencoder? Or is this truncation only used for sampling? (paragraph about the training is not clear to me)\\n\\nIndeed, $s$ is used both during training and sampling; thus, when $s=1$, this corresponds to single-step inference. The noise step $t$ condition for the network in this case can be omitted during training and sampling.\\n\\n> At the beginning of Section 5.1, the authors say that they consider time series of flow field from j= 1 to T. It is not clear how the time series are handled here. Could the authors clarify this point?\\n\\nWe would like to clarify that $j$ is the prediction stride that defines the temporal intervals at which we sample future states of the flow field based on the physical timestep $\\\\delta \\\\tau$. It is part of our reformulated autoregressive sampling method detailed in Section 2.2. It is a discrete parameter that takes values between 0 and $\\\\mathcal{T}$ (i.e., $\\\\mathcal{T} + 1$ possible values) that is used as a condition for the network and is independent from the fluid flow sequences with $R$ timesteps. To reach the target time $\\\\tau_f$ which is $R$ steps from the initial condition, we can take up to $\\\\lceil R/j \\\\rceil$ steps, which is maximum when $j=1$ (i.e., next-step prediction) and minimum when $j=\\\\mathcal{T}$ (i.e., temporal stride of $\\\\mathcal{T}$ steps).\\n\\n> How do the authors handle the high fluctuations areas of the domain?\\n\\nWe appreciate the reviewer\\u2019s concern regarding high-fluctuation regions which are definitely critical in fluid dynamics modeling. 
Typically in DDPMs and other deep learning-based surrogates, and by extension our approaches, we don't explicitly target specific physical phenomena; instead, we rely on the learning process to capture these dynamics intrinsically. By training our models on high-fidelity datasets, we ensure they learn the data distribution and the underlying physical characteristics of the flow field, including turbulent structures such as vortices.\\n\\n> Did the authors try to change the value of the initial input $x_{init}$?\\n\\nFor the $Tra$ and $Fturb$ test cases, we found that IR with $x_{init} = x_T$ provides highly accurate predictions with low NFEs; therefore, we didn't consider other $x_{init}$ values that might incur extra computational cost without adequate increase in accuracy. In the $Air$ cases, we experimented with $x_{init} = x_T$ as well as values obtained with truncated ancestral sampling (i.e., a pre-trained traditional DDPM sampled using Algorithm 1). We found the latter to achieve the most accurate predictions with the least possible NFEs.\"}" ] }
0FK6tzqV76
RTDiff: Reverse Trajectory Synthesis via Diffusion for Offline Reinforcement Learning
[ "Qianlan Yang", "Yu-Xiong Wang" ]
In offline reinforcement learning (RL), managing the distribution shift between the learned policy and the static offline dataset is a persistent challenge that can result in overestimated values and suboptimal policies. Traditional offline RL methods address this by introducing conservative biases that limit exploration to well-understood regions, but they often overly restrict the agent's generalization capabilities. Recent work has sought to generate trajectories using generative models to augment the offline dataset, yet these methods still struggle with overestimating synthesized data, especially when out-of-distribution samples are produced. To overcome this issue, we propose RTDiff, a novel diffusion-based data augmentation technique that synthesizes trajectories *in reverse*, moving from unknown to known states. Such reverse generation naturally mitigates the risk of overestimation by ensuring that the agent avoids planning through unknown states. Additionally, reverse trajectory synthesis allows us to generate longer, more informative trajectories that take full advantage of diffusion models' generative strengths while ensuring reliability. We further enhance RTDiff by introducing flexible trajectory length control and improving the efficiency of the generation process through noise management. Our empirical results show that RTDiff significantly improves the performance of several state-of-the-art offline RL algorithms across diverse environments, achieving consistent and superior results by effectively overcoming distribution shift.
[ "Reinforcement Learning", "Diffusion Model", "Reverse Synthesize" ]
Accept (Poster)
https://openreview.net/pdf?id=0FK6tzqV76
https://openreview.net/forum?id=0FK6tzqV76
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yKlWAyvX0i", "xaU5TCSYU7", "wuBQ78mLUP", "v1tYbXlH12", "rpsbFEJuc8", "qT6N3WO3KO", "oEQOuqQ3Dt", "nqveT8KtWF", "lzUbMeJzRH", "Zx5FOkBBdM", "Z9XHJdPdVt", "VSleHcn9Bf", "UoM0p7oVOg", "TbSpacElfd", "S5RhAVz3RX", "QQJAPT9YxU", "PVZAFyx9rY", "PGE7YrRvgJ", "CTcdO3a3vf", "BkK1zjrPjI", "Bin8BVqKCY", "AOKjNV3Jzt", "A5TzBg0Qkf", "9WxREqrg5W", "1aAcZ66Gz3", "0YfwUQhJ7g" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732765978413, 1732735962504, 1732658882151, 1730638056802, 1732542602621, 1732542690642, 1732542494724, 1732602511011, 1737523652148, 1734768838897, 1732542426745, 1732658907312, 1733307355893, 1733225224885, 1733155933814, 1730449753508, 1732542276565, 1732542159107, 1732765337275, 1732542135413, 1732602049941, 1733024733062, 1732542318240, 1730643501669, 1732544756669, 1729776059003 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_FUCB" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_exWn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4625/Area_Chair_jn7J" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_exWn" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_NMzW" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_LuMS" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_NMzW" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Authors" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_LuMS" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_FUCB" ], [ "ICLR.cc/2025/Conference/Submission4625/Reviewer_exWn" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer LuMS\", \"comment\": \"Thank you so much for your response and for increasing the score! We are glad that our rebuttal has addressed the concerns raised. If you have any further questions, we are more than happy to discuss them. We will incorporate all your comments and suggestions into our revision, which are invaluable for improving the quality of our work.\"}", "{\"title\": \"Response to Reviewer exWn\", \"comment\": \"Thank you so much for your response and additional comments on the paper. We are very glad to address your remaining concern regarding our contributions in comparison to ROMI.\\n\\nFirst, we would like to highlight that the use of diffusion models for data augmentation in RL is a relatively emerging and underexplored area. We are the first work that introduces diffusion models for reverse trajectory synthesis. 
We believe this represents a significant contribution that should not be undervalued simply because of ROMI \\u2013 as clarified in our previous response, ROMI still belongs to the traditional model-based methods.\\n\\nSecond, we would like to emphasize that our motivation for applying diffusion models to trajectory generation extends beyond merely reducing inaccuracies compared to ROMI. Diffusion models possess several key advantages that we aim to leverage:\\n- Diffusion models are particularly effective at generating **longer**, more accurate trajectories compared to traditional model-based methods like ROMI. Longer trajectories provide richer and more consistent information for offline RL agents, leading to improved performance. Our approach includes an out-of-distribution (OOD) detector that **adaptively determines trajectory length**, further enhancing the utility of long trajectories generated by diffusion models.\\n- As a traditional model-based method, ROMI relies on **separate learned dynamics models and rollout strategies, introducing additional complexity**. Unlike ROMI and model-based methods in general, diffusion models naturally integrate these processes, mitigating such challenges. Moreover, diffusion models allow for innovative techniques, such as the noise control mechanism proposed in our work.\\n- We believe our work establishes a more general and flexible framework for reverse trajectory synthesis, creating opportunities to leverage advancements in diffusion-based data generation. 
For example, exploring guidance-based diffusion generation to produce more tailored trajectories could be an interesting direction for future research.\\n\\nThird, our work also introduces key techniques to optimize diffusion-based trajectory generation: 1) **OOD Detector for Length Control:** This mechanism is vital for generating trajectories that extend beyond the offline data distribution, introducing novel environmental information while ensuring reliability by preventing trajectories from deviating excessively. 2) **Noise Control Strategy:** This generalizable technique enhances the diversity of generated trajectories, particularly when sample size is limited, and is applicable to any diffusion-based trajectory generation approach.\\n\\nFinally, **to clarify that vanilla diffusion models alone cannot match the performance of our RTDiff, we conducted an additional ablation study**. This study involved using diffusion models to generate reverse trajectories *without* incorporating the other components of our method. The results, presented below, show that na\\u00efve diffusion models significantly underperform compared to our RTDiff. Furthermore, when comparing the performance of vanilla diffusion models with ROMI, the improvements, if any, are inconsistent and relatively minor.\\nTherefore, the results validate our contribution, demonstrating that **it extends beyond the straightforward application of diffusion models**.\\n \\n| | IQL+RTDiff| IQL+ DM(vanilla)|IQL+ROMI | TD3BC+RTDiff| TD3BC+DM(vanilla)| TD3BC+ROMI|\\n| ------ | ------------------ | ---- | ---- | ---- | ---- | ---- |\\n|maze2d-umaze| 8.3 |4.3|5.4|10.2|9.3|9.6|\\n|maze2d-medium | 3.3 |2.3|2.1|9.8|8.9|9.4|\\n|maze2d-large | 14.3 | 9.0|8.1|7.7|4.3|3.5|\\n\\nWe hope our response has addressed your remaining concerns. If you have any further comments, we are more than happy to discuss them. 
We will incorporate all your comments and suggestions into our revision, which are invaluable for improving the quality of our work.\"}", "{\"title\": \"Reply to Reviewer NMzW\", \"comment\": \"Thank you so much for your response and for increasing the score! We are glad that our rebuttal has addressed the concerns raised.\\nIf you have any further questions, we are more than happy to discuss them. We will incorporate all your comments and suggestions into our revision, which are invaluable for improving the quality of our work.\"}", "{\"summary\": \"This paper proposes RTDiff, a novel diffusion-based data augmentation technique that synthesizes trajectories in a reverse direction. Such reverse generation naturally mitigates the risk of overestimation by ensuring that the agent avoids planning through unknown states. RTDiff also introduces some other tricks including flexible trajectory control and noise management to improve synthesis quality. Empirical results show the advantage of reverse generation over forward generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The article is well-written, with clear expression and logic, effectively reflecting the main argument.\\n2. The experiments are very comprehensive. The authors validate the effectiveness of their method across a series of tasks, including both proprioceptive observations and visual observations.\", \"weaknesses\": \"1. I believe the authors miss a key related work (called ROMI [1]), which is the first to propose using reverse trajectory generation in the field of offline reinforcement learning. The motivation described for reverse trajectory generation in these two works is also very similar. Therefore, considering that this paper is neither the first to propose the use of reverse trajectory generation in offline RL nor the first to use diffusion for data augmentation, I would say the novelty of this paper is quite limited.\\n2. 
I think this paper lacks comparisons with model-based offline RL methods in the experimental section. According to my understanding, using a diffusion model for data augmentation essentially falls under the same category as previous model-based offline RL methods. For instance, MOPO [2] essentially generates a batch of synthetic samples to supplement the original samples. Therefore, the authors should compare some model-based offline RL methods, such as MOPO, RAMBO [3], etc.\\n\\n[1] Wang et al. \\\"Offline Reinforcement Learning with Reverse Model-based Imagination\\\" (NeurIPS'21)\\n[2] Yu et al. \\\"MOPO: Model-based Offline Policy Optimization\\\" (NeurIPS'20)\\n[3] Rigter et al. \\\"RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning\\\" (NeurIPS'22)\", \"questions\": \"1. Can the authors discuss in detail the differences between this paper and previous similar works (like ROMI)?\\n2. This method is built on IQL, TD3BC, and CQL. Have there been any adjustments to the hyperparameters of these methods after using data augmentation?\\n3. Is it possible to conduct a quantitative evaluation of the quality of the generated trajectories? For example, assessing the model error of the generated trajectories, etc.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer exWn\", \"comment\": \"Thanks for your insightful and inspiring comments! We provide the following clarifications in response to your concerns.\\n1. **W1: Comparison with ROMI**\\n- We thank the reviewer for highlighting this related work. We have updated the paper to include a comparison of this work in Appendix C.5.\\nWe want to emphasize that our approach is significantly different from ROMI. First, ROMI focuses on learning the environment's dynamics and employing a reverse rollout strategy based on reverse dynamics. 
In contrast, our method directly generates trajectories using diffusion models, which avoids the inaccuracies inherent in dynamics learning and rollout strategies, resulting in higher-quality generated trajectories.\\n- Additionally, our work introduces a novel strategy to fully leverage the diffusion model's ability to generate reliable long-length trajectories. Specifically, we incorporate an OOD detector to control the generation length. This detector ensures that the generated trajectories extend beyond the offline data distribution, introducing new environmental information to enhance offline RL algorithms. Simultaneously, it prevents trajectories from straying too far from the distribution, maintaining reliability and utility.\\n- Furthermore, we develop a noise control strategy that can be applied to any diffusion-based data augmentation method. This technique is particularly effective when the number of samples is small and can be utilized with any diffusion-based data augmentation approach for RL.\\n- Overall, our work introduces several distinct techniques compared to ROMI. To empirically demonstrate the advantages of RTDiff, we include ROMI as a baseline in our experiments. The results clearly show that RTDiff outperforms ROMI.\\n| | IQL+RTDiff| IQL+ROMI| TD3BC+RTDiff| TD3BC+ROMI|\\n| ------ | ------------------ | ---- | ---- | ---- |\\n|maze2d-umaze| 8.3 |5.4|10.2|9.6|\\n|maze2d-medium | 3.3 |2.1|9.8|9.4|\\n|maze2d-large | 14.3 | 8.1|7.7|3.5|\\n2. **W2: Visualization of the experiments**\\n- Thanks for the valuable comments. To better illustrate the generated trajectories in real environments used in the experiments, we have presented a new visualization of generated trajectories in the Maze2D environment in Appendix D of the updated version of the paper.\\n3. **W3: Typos**\\n- Thanks for pointing this out. We have fixed this issue in the updated version of the paper.\\n4. 
**Analysis of visual environments**\\n- First, we would like to clarify that we have already included the experimental results in the visual reinforcement learning environments in Section 5.3 of our original paper. \\n- Here we provide an additional analysis in the Meta-world environment, similar to that presented in Section 6. We choose Coffee Push as the example task. The results show that even in the visual environments, RTDiff still generates fewer risky transitions and more useful transitions.\\n\\n| | In2Out| Out2In | In2In | Out2Out |\\n| ------ | ------ | ---- | ------ | --- |\\n|RTDiff| 1.3% | 17.5% | 56.8% | 24.4% | \\n|Forward (w/o OOD) | 16.0% | 3.9% | 68.2% | 11.9% |\\n|Forward (w/ OOD) | 19.4% | 5.6% | 57.2% | 17.8% |\"}
We hope this discussion effectively addresses the reviewers' questions.\\n\\nIn the updated version of the paper, we have added more experiments, including comparisons with additional baselines, quantitative evaluations of the generated samples, and more ablation studies in the appendix. The updated sections are highlighted in orange.\"}", "{\"title\": \"Response to Reviewer NMzW (part 2/2)\", \"comment\": \"3. **W3: Fidelity of the generated trajectories**\\nThanks for your valuable comments. We have added a section in Appendix C.6 of the updated version of the paper to include the quantitative evaluation of generated trajectories.\\nBelow we show the fidelity of the generated trajectories. To measure the fidelity of the generated samples, we follow the previous works using two statistics: Marginal: Mean Kolmogorov-Smirnov [Ref1] and Correlation: Mean Correlation Similarity [Ref2]. As expected, the results show that RTDiff does not aim to generate more realistic trajectories, but rather to produce more diverse samples that lie outside the distribution, thereby benefiting the RL performance. 
This is because RTDiff generates adaptive, longer trajectories compared with other baselines, attributed to our proposed OOD detector and reverse synthesis model.\\nTherefore, we would like to further emphasize that, while we agree with the reviewer that fidelity is an important factor in assessing data generation in general, our focus here is more on the \\\"usefulness\\\" of the generated data, specifically how it improves RL performance.\\n\\n| Dataset | RTDiff Marginal $\\\\uparrow$ | RTDiff Correlation $\\\\uparrow$ | SynthER Marginal $\\\\uparrow$ | SynthER Correlation $\\\\uparrow$ | ATraDiff Marginal $\\\\uparrow$ | ATraDiff Correlation $\\\\uparrow$ |\\n|--------------------|------------------|---------------------|---------------|------------------|---------------|------------------|\\n| hopper-medium | 0.932 | 0.983 | 0.985 | 0.998 | 0.967 | 0.994 |\\n| hopper-medexp | 0.953 | 0.989 | 0.958 | 0.992 | 0.963 | 0.994 |\\n| hopper-expert | 0.941 | 0.985 | 0.934 | 0.982 | 0.953 | 0.991 |\\n\\n[Ref1] The kolmogorov-smirnov test for goodness of fit. Frank J. Massey Jr. 1951.\\n\\n[Ref2] Tests for rank correlation coefficients. E. C. Fieller, et al. 1957.\"}", "{\"comment\": \"My main concern is still the contribution, as the idea of employing a reverse rollout has been introduced previously, thus limiting this paper to decreasing inaccuracies with diffusion, which largely reduces the novelty. Hence, I maintain the score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"RTDiff proposes a novel diffusion-based approach for reverse trajectory synthesis in offline reinforcement learning, addressing the critical issue of distribution shift with mechanisms like an OOD detector and noise control. 
The authors provide extensive empirical evidence showcasing significant improvements over baselines, including ROMI, and demonstrate that the reverse synthesis strategy generates more reliable trajectories, enhancing agent performance. While concerns about the novelty relative to ROMI and computational efficiency were raised, the inclusion of diffusion models introduces a unique generative capability, and the provided ablation studies validate the contribution of each component. The paper represents a meaningful step forward in diffusion-based RL research and should be accepted.\", \"additional_comments_on_reviewer_discussion\": \"The discussion centered on comparisons with ROMI, computational costs, and the novelty of the reverse synthesis approach. Critical reviewer exWn argued that the contribution was incremental and highlighted inefficiencies in diffusion-based generation, while others, including LuMS, FUCB, and NMzW, were convinced by the authors' added comparisons, ablations, and clarified novelty claims. The authors effectively differentiated RTDiff from ROMI by emphasizing its reliance on diffusion models and added mechanisms like OOD detectors and noise control, ultimately persuading most reviewers to support the paper. These additions and the strong empirical results outweighed the remaining concerns, leading to a recommendation for acceptance.\"}", "{\"title\": \"Response to Reviewer NMzW (part 1/2)\", \"comment\": \"Thanks for your insightful and inspiring comments! We provide the following clarifications in response to your concerns.\\n1. **W1: Comparison with ROMI**\\n- We thank the reviewer for highlighting this related work. We have updated the paper to include a comparison of this work in Appendix C.5.\\nWe want to emphasize that our approach is significantly different from ROMI. First, ROMI focuses on learning the environment's dynamics and employing a reverse rollout strategy based on reverse dynamics. 
In contrast, our method directly generates trajectories using diffusion models, which avoids the inaccuracies inherent in dynamics learning and rollout strategies, resulting in higher-quality generated trajectories.\\n- Additionally, our work introduces a novel strategy to fully leverage the diffusion model's ability to generate reliable long-length trajectories. Specifically, we incorporate an OOD detector to control the generation length. This detector ensures that the generated trajectories extend beyond the offline data distribution, introducing new environmental information to enhance offline RL algorithms. Simultaneously, it prevents trajectories from straying too far from the distribution, maintaining reliability and utility.\\n- Furthermore, we develop a noise control strategy that can be applied to any diffusion-based data augmentation method. This technique is particularly effective when the number of samples is small and can be utilized with any diffusion-based data augmentation approach for RL.\\n- Overall, our work introduces several distinct techniques compared to ROMI. To empirically demonstrate the advantages of RTDiff, we include ROMI as a baseline in our experiments. The results clearly show that RTDiff outperforms ROMI.\\n| | IQL+RTDiff| IQL+ROMI| TD3BC+RTDiff| TD3BC+ROMI|\\n| ------ | ------------------ | ---- | ---- | ---- |\\n|maze2d-umaze| 8.3 |5.4|10.2|9.6|\\n|maze2d-medium | 3.3 |2.1|9.8|9.4|\\n|maze2d-large | 14.3 | 8.1|7.7|3.5|\\n2. **W2: Explanation of the idea**\\n- We want to clarify that *the core idea of this work is not to directly reduce the ratio of generated data that falls outside the offline data distribution but rather to mitigate the issues caused by such trajectories*. The problem with data distribution arises from the overestimation of transitions in the fake trajectories generated during data augmentation. 
These overestimated transitions can directly impact the strategies learned by the agents, leading to degraded performance.\\n- Our approach addresses this issue by performing synthesis in reverse order. This ensures that even if some trajectories deviate from the offline data distribution, the overestimation of such transitions does not directly harm performance.\\n- As analyzed in Section 6, we categorize transitions into four types, where only the In2In transitions are strictly within the offline data distribution. Among the other three categories, the In2Out transitions are particularly detrimental to performance. Our goal is not to eliminate the other three categories entirely or ensure that all generated transitions are in-distribution. Instead, we focus on reducing the number of In2Out transitions.\\n- As demonstrated in Table 7, reverse synthesis can even result in fewer In2In transitions, which are less useful for data augmentation. This finding further validates the effectiveness of our approach.\"}", "{\"title\": \"Reply to Reviewer FUCB\", \"comment\": \"Thank you so much for your response and for increasing the score! We are glad that our rebuttal has addressed the concerns raised.\\nIf you have any further questions, we are more than happy to discuss them. We will incorporate all your comments and suggestions into our revision, which are invaluable for improving the quality of our work.\"}", "{\"title\": \"Reply to Reviewer exWn\", \"comment\": \"Thank you for your valuable comments and your appreciation of our work. 
However, we respectfully disagree with the opinion that the additional components in our work represent a natural progression from ROMI in applying diffusion.\\n\\nFirst, we would like to reiterate that our work is substantially different from traditional model-based methods like ROMI and demonstrates notable strengths, as detailed in our previous responses, validated through our experimental comparison, and acknowledged by Reviewers **LuMS**, **FUCB**, and **NMzW**.\\n\\nSecond, we would like to emphasize that the crucial components, such as generation length control, leverage the unique properties of diffusion models, making our approach distinct from a straightforward extension of ROMI. Additionally, to the best of our knowledge, we are **the first to utilize an out-of-distribution (OOD) detector to control the generation length for reinforcement learning data augmentation**. **It is a non-trivial insight to introduce the OOD detector for our reverse synthesis model, as forward synthesis methods cannot benefit from the OOD detector as empirically validated**.\\n\\nFurthermore, we would like to kindly point out the promising value and recognition of our work within the community by referring to publications of a similar nature. For example, [1] (one of our forward synthesis baselines) trained a diffusion model on an offline dataset to generate additional transitions. Despite sharing similarities with earlier model-based data augmentation methods, [1] was accepted at NeurIPS 2023.\\n\\nRegarding the concern about increased computational cost due to the diffusion model, we would like to clarify that, as mentioned in Section 4.1 of our paper (Line 237), we use a **lightweight architecture for the diffusion model**, which differs from typical diffusion architectures used for image generation and requires significantly fewer resources. 
Additionally, we want to emphasize that, as demonstrated by previous research [1,2,3], the primary bottleneck in reinforcement learning often lies in interacting with the environment, leading researchers to prioritize the number of samples used. Therefore, we respectfully suggest that the criticism regarding running time may be somewhat misplaced.\\n\\n\\n[1] Synthetic Experience Replay. NeurIPS, 2023.\\n\\n[2] Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning. NeurIPS, 2023.\\n\\n[3] Accelerating Online Reinforcement Learning with Imaginary Trajectories. ICML, 2024.\"}", "{\"comment\": \"Thank you for your response. I appreciate your efforts in clarifying the manuscript. However, I believe that the additional components of your method represent a natural progression in applying the diffusion model for data generation, inspired by the ROMI framework. Additionally, it's important to consider that using a diffusion model for trajectory generation can lead to increased time and computational resource costs.\"}", "{\"title\": \"Reply to Reviewer exWn\", \"comment\": \"As the discussion period will come to an end in less than 24 hours, we would like to send you a reminder about our responses as above to solve your concerns. Please check whether your concerns have been addressed. We are sincerely looking forward to hearing from you, and are always happy to have more discussions with you!\"}", "{\"summary\": \"This paper introduces a novel diffusion-based data augmentation method, RTDiff, for offline reinforcement learning. First, RTDiff mitigates the data distribution shift issue present in previous data augmentation methods by generating reverse trajectories instead of forward ones. Second, RTDiff trains an out-of-distribution (OOD) detector to truncate the OOD segments of generated trajectories, further enhancing sample authenticity. 
Finally, the authors propose a new noise control method to improve sample generation efficiency. Experimental results validate the effectiveness and efficiency of RTDiff and different components across both vector-based and pixel-based tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-written and easy to follow.\\n2. RTDiff introduces reverse trajectory generation, an OOD detector, and a noise control method to achieve efficient and reliable sample generation. These approaches are intuitively sound.\\n3. The authors conduct extensive benchmark and ablation experiments. Experimental results demonstrate that RTDiff outperforms previous baselines on both vector-based and pixel-based tasks, and each component\\u2014reverse trajectory generation, the OOD detector, and noise control\\u2014exhibits its effectiveness.\", \"weaknesses\": \"1. Reverse trajectory generation is not new to offline RL, and the paper lacks a discussion and experimental comparison with prior works, such as [1].\\n2. The paper lacks a clear and well-reasoned explanation of the issues with previous data augmentation methods. In lines 88-90, this paper claims that previous data augmentation methods suffer from the data distribution shift, potentially leading to value overestimation. However, since all of these works use offline RL algorithms, I think data distribution shift is not the key issue. On the contrary, data augmentation is only effective when the generative model can produce samples that differ from the training data.\\n3. Generated data fidelity is a more critical factor. The paper lacks a quantitative evaluation of the fidelity of generated samples.\\n\\n[1] Offline Reinforcement Learning with Reverse Model-based Imagination. 
NeurIPS, 2021.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FUCB (part 1/2)\", \"comment\": \"Thanks for your insightful and inspiring comments! We provide the following clarifications in response to your concerns.\\n\\nIn particular, we discuss in detail how our approach significantly differs from ROMI and model-based RL methods, providing empirical comparisons that validate our approach's superior performance. Additionally, we would like to emphasize that the use of diffusion models for data augmentation in RL is a relatively emerging area. Prior works such as SynthER and ATraDiff also focus on using diffusion models for data augmentation, with their novelty stemming from the specific techniques used to exploit diffusion models. Within this context, we believe the novelty of our work should not be diminished solely because we are not the first to use diffusion for data augmentation. \\n\\n1. **W1/Q1: Comparison with ROMI**\\n- We thank the reviewer for highlighting this related work. We have updated the paper to include a comparison of this work in Appendix C.5.\\nWe want to emphasize that our approach is significantly different from ROMI. First, ROMI focuses on learning the environment's dynamics and employing a reverse rollout strategy based on reverse dynamics. In contrast, our method directly generates trajectories using diffusion models, which avoids the inaccuracies inherent in dynamics learning and rollout strategies, resulting in higher-quality generated trajectories.\\n- Additionally, our work introduces a novel strategy to fully leverage the diffusion model's ability to generate reliable long-length trajectories. Specifically, we incorporate an OOD detector to control the generation length. 
This detector ensures that the generated trajectories extend beyond the offline data distribution, introducing new environmental information to enhance offline RL algorithms. Simultaneously, it prevents trajectories from straying too far from the distribution, maintaining reliability and utility.\\n- Furthermore, we develop a noise control strategy that can be applied to any diffusion-based data augmentation method. This technique is particularly effective when the number of samples is small and can be utilized with any diffusion-based data augmentation approach for RL.\\n- Overall, our work introduces several distinct techniques compared to ROMI. To empirically demonstrate the advantages of RTDiff, we include ROMI as a baseline in our experiments. The results clearly show that RTDiff outperforms ROMI.\\n| | IQL+RTDiff| IQL+ROMI| TD3BC+RTDiff| TD3BC+ROMI|\\n| ------ | ------------------ | ---- | ---- | ---- |\\n|maze2d-umaze| 8.3 |5.4|10.2|9.6|\\n|maze2d-medium | 3.3 |2.1|9.8|9.4|\\n|maze2d-large | 14.3 | 8.1|7.7|3.5|\\n2. **W2: Comparisons with model-based RL**\\n- We thank the reviewer for highlighting that model-based RL methods can also serve as strategies for offline data augmentation. However, the generation quality of model-based methods heavily depends on both the accuracy of the learned dynamics and the rolling-out strategy used to produce trajectories. These factors can make it challenging to achieve reliable data augmentation for RL. It is worth noting that ROMI is an example of a model-based data augmentation method.\\n- Our approach, in contrast, incorporates several novel components specifically designed to improve the quality of generated data, such as the OOD detector and noise control mechanisms.\\n- To provide an empirical comparison between our method and model-based RL approaches, we have included an additional experiment in Appendix C.5 of the updated version of the paper, comparing our method to MOPO as suggested by the reviewer. 
The results below demonstrate that our method outperforms model-based methods in terms of performance, further highlighting its effectiveness.\\n| | IQL+RTDiff| IQL+MOPO| TD3BC+RTDiff| TD3BC+MOPO|\\n| ------ | ------------------ | ---- | ---- | ---- |\\n|maze2d-umaze| 8.3 | 5.1 | 10.2 | 9.6 |\\n|maze2d-medium | 3.3 | 1.7 | 9.8 | 8.8 |\\n|maze2d-large | 14.3 | 5.9 | 7.7 | 2.6 |\\n3. **Q2: RL methods hyperparameters**\\n- After the data augmentation, we did not adjust any hyperparameters of the RL methods.\"}", "{\"title\": \"Response to Reviewer LuMS (part 2/2)\", \"comment\": \"5. **Theoretical contributions**\\n- This paper primarily focuses on algorithmic development and empirical evaluation, rather than on theoretical analysis or formal proofs. Similar to previous works like ROMI, SynthER, and ATraDiff, our objective is to propose practical methodologies and validate their effectiveness through comprehensive experimental results. We leave the exploration of theoretical guarantees and formal proofs as interesting future work.\"}", "{\"comment\": \"Thanks for the authors' detailed response and the addition of comprehensive experimental results to further verify the effectiveness of the proposed method. My concerns have been addressed. I also read other reviews and the authors' corresponding responses. Other reviewers are concerned about the novelty of the proposed reverse trajectory generation approach compared to ROMI. In their rebuttal and the revision, the authors have explained the differences between ROMI and their approach. Also, experimental results show the superiority of RTDiff over ROMI. To me, the explanation and results are convincing, and the score has been raised.\"}", "{\"title\": \"Response to Reviewer LuMS (part 1/2)\", \"comment\": \"Thanks for your insightful and inspiring comments! We provide the following clarifications in response to your concerns:\\n\\n1. 
**Incorporating the OOD detector with other baselines**\\n\\n- Thanks for the valuable question. We would like to clarify that the *OOD detector is particularly useful for our reverse synthesis model, whereas forward synthesis methods cannot benefit from it*. \\n- Specifically, in our method, the OOD detector is used to control the length of the generated trajectories. This is crucial because we aim to generate trajectories that extend beyond the offline data distribution to provide new information to the agent, while ensuring they do not deviate too far, which could reduce their usefulness and increase risk. The effectiveness of our OOD detector is demonstrated in Tables 4 and 12. Our results show that the OOD detector outperforms any fixed-length generation strategy. Additionally, we observed that setting the threshold too high or too low negatively impacts performance.\\n- To better understand why reverse synthesis is useful, we want to further show that this OOD detector is not useful for other synthesis methods. First of all, SynthER only performs transition-level synthesis, which inherently does not need to control the generation length. Also, we want to emphasize that with forward synthesis methods like ATraDiff, generating trajectories that go outside the offline data distribution is riskier, as transitions going from inside to outside may lead to performance degradation. To validate this, we show the results of combining ATraDiff with an OOD detector at different thresholds. 
From the results below, we found that these forward synthesis methods derive limited benefits from the OOD detector, which validates the unique effectiveness of reverse synthesis in our framework.\\n| | RTDiff (Ours) | ATraDiff | ATraDiff with dis_m = 1.0 | ATraDiff with dis_m = 1.3 | ATraDiff with dis_m = 1.5 | ATraDiff with dis_m = 2.0 |\\n| --- | ----- | ---- | ---- | ---- | ---- | ---- |\\n|maze2d-umaze|12.3 | 7.1 | 7.4 | 7.0 | 6.3 | 6.0 |\\n|maze2d-medium |8.3 |6.2 | 6.6 | 6.3 | 5.6 | 5.4 |\\n|maze2d-large | 11.3 | 7.8 | 7.7 | 7.8 | 7.4 | 7.1|\\n2. **Incorporating noise control with other baselines**\\n- Our proposed noise control is largely independent of the specific generation method and can be applied to any data augmentation approach using a diffusion-based framework. It proves particularly effective when the number of examples is limited. Notably, this technique is also applicable to SynthER and ATraDiff, as demonstrated by the results shown below. We use SynthER and ATraDiff to generate 1M data points each, once with noise control and once with random generation. The results show that noise control consistently improves performance.\\n| | SynthER with Noise control| SynthER with random generation | ATraDiff with Noise control | ATraDiff with random generation |\\n| ------ | ------------------ | ---- | ---- | ---- |\\n| maze2d-umaze | 2.3 | 1.7 | 3.1 | 2.8 |\\n| maze2d-medium | 0.6 | 0.3| 1.6 | 1.2 |\\n|maze2d-large | 7.3 | 6.3 | 8.1 | 5.2 |\\n3. **Explanation of illustration examples**\\n- Thanks for the valuable comment. We first sincerely apologize for any confusion and would like to clarify that the reviewer may have misunderstood our illustration example. The results presented in Table 7 were generated in the Maze2D-large environment, not in the environment depicted in Figure 2. 
\\n- The trajectories depicted in Figure 2 are intended to intuitively illustrate the behavioral differences between reverse synthesis and normal forward synthesis, highlighting the benefits of reverse synthesis. These trajectories are purely for illustrative purposes and were not generated by the models. \\n- To better showcase the real generated trajectories in the environments investigated in this paper, we have included a new visualization of the generated trajectories in the Maze2D environment in Appendix D of the updated version of the paper.\\n4. **Why does normal synthesis with an OOD detector have more In2Out transitions?**\\n- Thanks for the question. This is because the generation length in normal synthesis without an OOD detector is set to a fixed length, which must be kept shorter than the average generation length with an OOD detector to avoid a performance decrease from generating risky trajectories. In contrast, forward synthesis with an OOD detector allows for an increased average generation length, because the OOD detector can avoid generating overly long trajectories. Therefore, forward synthesis with an OOD detector can generate some trajectories outside the offline data distribution. As demonstrated by the results in Table 2, forward synthesis with an OOD detector can also produce more Out2Out transitions and significantly fewer In2In transitions, which might be redundant.\"}", "{\"comment\": \"Thank you for your detailed response, which thoroughly addresses my concerns. I am willing to raise my score to 6.\"}", "{\"title\": \"Further response to Reviewer exWn\", \"comment\": \"Dear Reviewer exWn,\\nThank you for taking the time to review our manuscript and for your thoughtful feedback. We have updated the PDF to include the additional results requested in your previous response. 
We hope these results, along with our clarifications, address your concerns and provide further support for a re-evaluation of the paper.\\nIf you have any further questions or require additional clarification, we would be more than happy to address them.\"}", "{\"title\": \"Response to Reviewer FUCB (part 2/2)\", \"comment\": \"4. **Q3: Quantitative evaluation of generated trajectories**\\n- Thanks for your valuable comments. We have included this in Appendix C.6 of the updated version of the paper, where we present the quantitative evaluation of generated trajectories.\\n- Below we show the average model error of the generated trajectories. To measure the model error of the generated samples, we calculate the normalized error between the synthesized states and the real states after transition, which is defined as $(T(s,a) - s')^2$ for a transition $(s, a, s')$. As expected, the results show that RTDiff does not aim to generate more realistic trajectories, but rather to produce more diverse samples that lie outside the distribution, thereby benefiting the RL performance. 
This is because RTDiff generates adaptive, longer trajectories compared with other baselines, attributed to our proposed OOD detector and reverse synthesis model.\\n- Therefore, we would like to further emphasize that, while we agree that the quality is an important factor in assessing data generation, our focus here is more on the \\\"usefulness\\\" of the generated data, specifically how it improves RL performance.\\n| | RTDiff | SynthER | ATraDiff |\\n| ------ | ------------------ | ---- | ---- |\\n|maze2d-umaze| 0.05 | 0.02 | 0.03 |\\n|maze2d-medium | 0.06 | 0.03 | 0.03 |\\n|maze2d-large | 0.11 | 0.07 | 0.08 |\"}", "{\"summary\": \"To address the distribution shift issue in offline reinforcement learning, this paper proposes a novel diffusion-based data augmentation technique, namely RTDiff, where rather than generating forward trajectories, the reverse one is synthesized, which is also the paper's main contribution. Furthermore, the performance of RTDiff is enhanced with the introduction of trajectory length control and noise management. Experimental results show the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The ideas in this paper are interesting and novel, where to my knowledge, this is the first work that utilizes the concept of generating the reverse trajectories to address the distribution shift issue. The paper is clearly written and well-motivated. The effectiveness of the proposed method is verified in various environments, and ablation studies are also conducted to validate the effectiveness of different components of the proposed method.\", \"weaknesses\": \"The main concern of the paper is whether the reverse synthesis can actually address the issue of distribution shift. 
As it is not stated clearly whether the OOD detector is incorporated with other data augmentation baselines, e.g., SynthER and ATraDiff, it is not very certain whether RTDiff's better performance is due to the reverse synthesis or the use of the OOD detector. In Section 6, an analysis is conducted by using a simple illustrative environment to show why reverse synthesis avoids issues present in normal synthesis. However, on the one hand, it would be better to directly use an environment from the experiments for the analysis; on the other hand, it is confusing why the reverse synthesis generates trajectories that move from the dangerous area to the upper or lower areas while normal synthesis generates trajectories that start from the lower area and enter the middle dangerous area. Do these two approaches both start from a state in the offline data, and then generate the trajectories in different ways? If the OOD detector is used in the normal synthesis, can the dangerous areas also be avoided? Moreover, the theoretical contribution of the proposed method is not very significant.\\nIf the concerns can be addressed, I would like to raise my score.\", \"questions\": \"1. Are the OOD detector and noise management incorporated with other data augmentation baselines, e.g., SynthER and ATraDiff?\\n\\n2. In Section 6, a very specific environment is adopted to show the advantages of reverse synthesis over normal synthesis; what is the reason/possible explanation that normal synthesis with an OOD detector even produces significantly more In2Out transitions than normal synthesis without an OOD detector? (18.2% vs. 11.2%)\\n\\n3. For other questions, please see questions raised in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the authors' detailed reply. 
I have no further questions and I'm willing to raise my score.\"}", "{\"summary\": \"Traditional offline reinforcement learning methods often introduce conservative biases to limit exploration to familiar regions, but this can restrict an agent's ability to generalize. While recent approaches use generative models to expand offline datasets, they can overestimate synthesized data, particularly when it includes out-of-distribution samples. To address this, RTDiff is introduced: a diffusion-based data augmentation technique that creates trajectories in reverse, moving from unknown to known states. This reverse approach reduces the risk of overestimation by ensuring the agent avoids planning through unfamiliar regions. It also supports generating longer trajectories, utilizing diffusion models effectively while maintaining reliability. RTDiff further optimizes the process with flexible trajectory length control and noise management to improve generation efficiency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The reverse generation method, which has been empirically verified, effectively reduces the augmentation of data in risky regions. This concept is intuitively illustrated in the diagram.\", \"weaknesses\": \"1. Previous research [1] introduced a reverse data generation approach using a transition model. In the current work, the vanilla model is replaced with a diffusion model, yet the fundamental concept remains unchanged, limiting the overall contribution.\\n\\n2. The explanation relies on an intuitive diagram, but it would be more effective to demonstrate several specific cases, identifying which states are risky. 
For example, some states are prone to be overestimated and easily generated by a forward model, but the reverse model effectively avoids generating them.\\n\\n3. Minor errors are present, such as in line 156, where \\\"dat\\\" should be corrected to \\\"data.\\\"\\n\\n4. Environments based on visual data should be included in the analysis.\\n\\n[1] Wang, Jianhao, et al. Offline reinforcement learning with reverse model-based imagination.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0F1rIKppTf
Through the Looking Glass: Mirror Schrödinger Bridges
[ "Leticia Mattos Da Silva", "Silvia Sellán", "Justin Solomon" ]
Resampling from a target measure whose density is unknown is a fundamental problem in mathematical statistics and machine learning. A setting that dominates the machine learning literature consists of learning a map from an easy-to-sample prior, such as the Gaussian distribution, to a target measure. Under this model, samples from the prior are pushed forward to generate a new sample on the target measure, which is often difficult to sample from directly. In this paper, we propose a new model for conditional resampling called mirror Schrödinger bridges. Our key observation is that solving the Schrödinger bridge problem between a distribution and itself provides a natural way to produce new samples from conditional distributions, giving in-distribution variations of an input data point. We show how to efficiently solve this largely overlooked version of the Schrödinger bridge problem. We prove that our proposed method leads to significant algorithmic simplifications over existing alternatives, in addition to providing control over conditioning. Empirically, we demonstrate how these benefits can be leveraged to produce proximal samples in a number of application domains.
[ "entropic optimal transport", "schrödinger bridge", "stochastic differential equations", "sampling" ]
Reject
https://openreview.net/pdf?id=0F1rIKppTf
https://openreview.net/forum?id=0F1rIKppTf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zo7dOroGyK", "ypTgT9UrrK", "yGPQKud2YT", "xuxhCMKSTj", "v8lZb38c0r", "utiamOtoiW", "twftXwY6CK", "oqWZLwPM69", "oWC7QbhbtR", "loYl00aOra", "kHNQm9Q2wd", "jp9aYtVApC", "jE08WJy6P2", "id1ppYvPpj", "hwIRl4GAmj", "hgh5xpR3zo", "hcfxPKJemV", "fLQunVf1OA", "eaHwHFGWOv", "awhJyA1Ire", "WrGJqFE2mY", "Vtooh5JEWt", "RLV0JUrqUt", "RIP9h35PMs", "R7KYhJNCxN", "R1usbbv7EV", "EsNaYy9PVM", "9LgXXHY6ii", "7Xvf2Ph3xd", "6CAfb2B2D7", "5bjqv2R9GZ", "0o1ZtBtTPg" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732556429029, 1731721329115, 1737524197021, 1730678730200, 1734641617676, 1732642987221, 1732555874208, 1731720971338, 1731721673206, 1732649267187, 1738795941148, 1732231743862, 1732421134812, 1731720212852, 1732421045173, 1732559449965, 1730299396061, 1730716882438, 1733127513022, 1732232660659, 1732421302916, 1732303509750, 1733282446566, 1732303664489, 1732555598989, 1731720796298, 1732555742247, 1732421095000, 1732760956399, 1730575870112, 1732556014836, 1733177613192 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_Rjzk" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_gCiH" ], [ "ICLR.cc/2025/Conference/Submission12518/Area_Chair_EFLE" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_yMTk" ], [ 
"ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "~Leticia_Mattos_Da_Silva1" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_gCiH" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_yMTk" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_vpEm" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_gCiH" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_Rjzk" ], [ "ICLR.cc/2025/Conference/Submission12518/Authors" ], [ "ICLR.cc/2025/Conference/Submission12518/Reviewer_vpEm" ] ], "structured_content_str": [ "{\"title\": \"Thank you\", \"comment\": \"Thank you for all your efforts; I will take a careful look at the updated draft and consider updating my score by the end of the period.\"}", "{\"comment\": \"We appreciate your thoughtful comments and suggestions! We look forward to improving our submission based on your feedback. Attached is a PDF that reflects changes addressing your concerns. Below, we respond to each of your comments and propose experiments that will be posted within the next few days. 
We believe that these address all concerns raised in your original review, but **please let us know as soon as possible if there are any other experiments or clarifications needed to strengthen our case**. We are happy to follow up with you and confident that we can address all of your concerns within the rebuttal period.\\n\\n> \\u201c[...] there is little comparison between MSB and DSB or DSBM in terms of quantifying sample versatility (they performed some comparisons in the Gaussian case, but these are far from conclusive).\\u201d\\n\\nWe strongly agree with your suggestion! We propose adding a comparison to baseline methods for image resampling. Specifically, we will post an experiment within several days that compares results in Section 5 for 2D Datasets and Image Resampling to other methods, such as DSB or DSBM. **Please let us know if this would suffice to address this concern.**\\n\\n> \\u201c[...] in Figure 3: is there a certain \\u03c3\\u22c6 after which the data generated by the MSB changes classes? This is likely hard to prove theoretically, but knowing if there exists some threshold after which data stops being the same class would be very interesting.\\u201d\\n\\nThis is a very interesting question! We added this to our conclusion in Section 6 Lines 538-539 as a direction for future research. This is a hard question from both a theoretical and empirical standpoint. For instance, consider the MNIST classes. Our method essentially flows samples out of the manifold and back to it; the new sample \\u201clands\\u201d at a distance proportional to the noise value. A threshold for changing class, however, would depend not only on this relationship between noise and spread of data, but also on the location of the initial sample relative to the boundary between classes in the data manifold.\\n\\n> \\u201cLine 053: maybe write what \\\"\\\\delta\\\"-measure means (or just say Dirac measure). The paragraph just above figure 3 is very unclear. 
I'm not sure what the rows and columns are meant to refer to here..\\u201d\\n\\nWe have modified our original submission to clarify these two concerns. We believe these are fully addressed in the current version of the PDF. Please refer to the text added in Orange (labeled \\u201cFIX\\u201d) in Line 53 (Introduction) and in Lines 508-510 (Section 5 for Image Resampling). Thank you for bringing this to our attention!\\n\\n> \\u201cEquation (24-25) in the work by Feydy et al. (2019) precisely proposes some kind of fixed-point equation on one potential function (whereas for entropic OT between two measures, there are typically two potentials to optimize over via the Sinkhorn algorithm)\\u201d\\n\\nWe have added a comment in Section 4 Lines 193-196 to highlight this connection. Our original submission cites Feydy (2019) in Section 2, but we agree that this connection can be more clearly highlighted to readers when discussing the single drift function in our method. We would like to point out that Feydy (2019) considers leveraging the symmetry of the transport problem in the static case. When considering the SB problem, an approach needs to be developed for the dynamical formulation in the language of path measures, which differs from theirs (and, similarly, from that of Kurras (2015)). This is one of the main contributions of our work. We believe the addition made in the PDF addresses this comment. Thank you for bringing this up!\\n\\n> \\u201cIs there any hope to provide a rule for choosing the amount of noise added in the process of generating images?\\u201d\\n\\nThere seems to be little hope of providing a rigorous rule, but our experiments suggest some general patterns. The relationship between the noise value and the spread of samples we presented for the Gaussian case in Section 4.4 provides some intuition as to why it is not possible to establish a general rule. 
In particular, the spread of samples in the Euclidean sense is directly proportional to the value of sigma. The constant of proportionality depends on variable parameters, including choice of dataset, choice of initial drift, and number of timesteps.\\n\\n> \\u201cThe choice of OU process appears entirely arbitrary. Why not consider standard Brownian motion as the reference process?\\u201d\\n\\nThis is a theoretical requirement. The prior needs to be a time-symmetric measure for the bridge to be time-symmetric. Standard Brownian motion is not time-symmetric, and the OU process was chosen for this reason. Time-symmetry is one of the key properties that the derivation of our algorithm hinges on. \\nWe appreciate you bringing this to our attention and we have highlighted this requirement in Section 4.1 Lines 221-222 to clarify its importance to readers. Further theoretical details can be found in Agarwal (2024), which is cited in our original submission.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces the mirror Schr\\u00f6dinger bridge, a method for addressing the resampling problem when the initial and target distributions are identical. Unlike traditional Schr\\u00f6dinger bridges, which are designed for mapping between two distinct distributions, the mirror Schr\\u00f6dinger bridge is formulated specifically for self-mapping within a single distribution. This unique approach facilitates the generation of in-distribution variations of data points, allowing for conditional resampling that maintains the original distribution\\u2019s integrity.\\n\\nThe authors develop a theoretical foundation for this method, employing time symmetry and the Alternating Minimization Procedure to establish convergence in the total variation metric, even for infinite-dimensional state spaces. This achievement addresses the challenging issue of convergence in high-dimensional settings. 
Additionally, the algorithm capitalizes on the time symmetry inherent in the problem, enabling it to model the diffusion drift with a single neural network. This innovation significantly reduces computational costs, effectively halving the effort compared to Iterative Proportional Fitting Procedure based approaches.\\n\\nEmpirical evaluations underscore the practical value of the mirror Schr\\u00f6dinger bridge across diverse applications, highlighting its capability to produce high-quality proximal samples that are valuable for tasks like data augmentation and generative modeling. In summary, this research claims to provide a theoretically rigorous and computationally efficient solution for conditional resampling within the same distribution, combining solid theoretical contributions with practical algorithmic advancements.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper introduces the mirror Schr\\u00f6dinger bridge framework, an approach that differs from traditional Schr\\u00f6dinger bridges by focusing on mapping a distribution onto itself rather than between distinct distributions. This self-mapping approach directly addresses the challenge of conditional resampling within the same distribution, opening up new possibilities for generating in-distribution variations of data points. By incorporating time symmetry and the Alternating Minimization Procedure (AMP) to establish theoretical foundations, the paper presents an innovative solution to resampling. The algorithm also leverages time symmetry to train a single neural network for modeling the diffusion process drift, enhancing computational efficiency.\\n\\nThe paper includes solid theoretical foundations and provides comprehensive convergence proofs in the total variation metric, even in infinite-dimensional state spaces. The AMP is carefully developed and shown to converge to the mirror Schr\\u00f6dinger bridge, ensuring methodological consistency. 
The algorithm\\u2019s implementation is efficient, theoretically reducing computational overhead by half compared to Iterative Proportional Fitting Procedure (IPFP)-based methods. Empirical evaluations across several applications support some theoretical claims, demonstrating the method\\u2019s capability to generate high-quality proximal samples for tasks like data augmentation and generative modeling. The organized presentation of theoretical and empirical findings underscores the paper\\u2019s contribution to the field.\\n\\nAdditionally, the paper is well-written and structured, guiding the reader through complex concepts with clarity. The transition from theoretical foundations to practical algorithmic details is seamless, ensuring a coherent flow. Most definitions and problem formulations are clearly presented. Explanations of AMP and iterative schemes enhance understanding, while algorithmic pseudocode supports practical comprehension.\", \"weaknesses\": \"While the paper introduces mirror Schr\\u00f6dinger bridges, the framework retains substantial mathematical similarities to traditional Schr\\u00f6dinger bridges. Could you clarify the key mathematical distinctions between the two approaches? The theoretical analysis would benefit from an error analysis for the Alternating Minimization Procedure (AMP) outlined in equations (4) and (5), which would provide valuable insights into the convergence rate of the proposed approach. The absence of such an analysis limits our understanding of the efficiency and accuracy of the AMP.\\n\\n\\n\\nOn the practical side, the algorithm does not include comparisons with other established approaches in the literature, nor is there any discussion regarding the impact or choice of specific discretization methods, such as the Euler-Maruyama scheme, in addressing this problem. 
Furthermore, the paper lacks a detailed explanation of all the Figures, which hinders the reader's ability to assess how the proposed method performs relative to existing methods and to understand the extent to which halving the iterations reduces runtime quantitatively. \\n\\nThe paper also lacks a quantitative flow for evaluating the equality of resampling across examples and fails to specify metrics used to assess performance. An empirical definition of \\\"proximity\\\" would add clarity. For instance, it is unclear whether a proximity value of 5 remains constant with $\\\\sigma=1$ or changes to 3 with a different $\\\\sigma$, and the way proximity is quantified in empirical examples remains vague. Although the paper claims validation across various application domains and provides some information on the experimental setup and datasets, it lacks sufficient detail on the specific metrics used for performance assessment, making it difficult to evaluate the method's effectiveness and robustness relative to existing techniques. \\n\\nFinally, while the paper emphasizes algorithmic simplifications that reduce computational costs by training only one neural network, it does not address the scalability of the method to high-dimensional data. The absence of any analysis on how the method performs as the data dimensionality increases leaves questions about its applicability to high-dimensional settings, which are increasingly relevant in practical applications.\", \"questions\": \"1. **Error Analysis for AMP**: The paper introduces the Alternating Minimization Procedure (AMP) in Equations (4) and (5) but lacks an error analysis. Could you elaborate on the convergence speed of AMP? Specifically, are there theoretical bounds or guarantees on the convergence rate that would enhance the theoretical foundation of your method?\\n\\n2. 
**Benchmarking Against Existing Methods**: Could you clarify which algorithms you compared with your proposed method for each example case in Section 5? Benchmarking against established methods would help situate your approach within existing literature.\\n\\n3. **Choice of Euler-Maruyama**: Your algorithm employs Euler-Maruyama discretization. How does this choice impact the accuracy and efficiency of solving the Schr\\u00f6dinger bridge problem? Have you considered alternative discretization schemes, and how do they compare in terms of performance?\\n\\n4. **Iterations vs. Runtime**: How does halving the number of iterations quantitatively affect running time? A breakdown of runtime reductions relative to iteration count would provide a clearer picture of the efficiency gains.\\n\\n5. **Quantitative Metrics and Proximity Definition**:\\n - **Metrics for Resampling Quality**: The paper lacks specific metrics to assess resampling quality in each example case. What metrics do you use to evaluate how well the method preserves the integrity of the original distribution during resampling?\\n - **Proximity Definition**: An empirical definition of proximity would clarify how closely generated samples align with input data. For example, on MNIST, how would a proximity value of a digit-5 image with \\\\(\\\\sigma=1\\\\) compare to that of a digit-3 image with a different \\\\(\\\\sigma\\\\)?\\n\\n6. **Experimental Setup Details**: Although the paper claims empirical validation across various domains, the experimental setup lacks specificity. Could you provide more details on the datasets, experimental procedures, and metrics used to assess performance?\\n\\n7. **Quantitative Results and Comparisons**: Could you include performance metrics and comparisons that demonstrate the robustness and potential advantages of mirror Schr\\u00f6dinger bridges over existing techniques?\\n\\n8. 
**Scalability to High-Dimensional Data**:\n - **Performance on High-Dimensional Benchmarks**: The scalability of your method to high-dimensional data is not discussed. Could you provide insights into its performance and computational complexity on high-dimensional or large-scale datasets?\n - **Strategies for High-Dimensional Data**: What strategies do you propose to ensure the scalability and efficiency of mirror Schr\u00f6dinger bridges when applied to high-dimensional data or data on manifolds? Are there modifications or optimizations that could improve performance in these scenarios?\n\n9. **Interpretation of Figure 2**: In Figure 2, the resampling estimate with $\\sigma=1$ appears to produce more concentrated samples compared to the original samples. Could you explain this behavior? Is there some form of geometric or manifold enhancement occurring in the resampling process?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work is on the topic of Schr\u00f6dinger Bridge. The authors propose a new method to solve a special class of Schr\u00f6dinger Bridge problems where the two marginal distributions coincide. Both theoretical analysis and practical implementation are provided. It is claimed the proposed algorithm can reduce the computational cost of solving this class of Schr\u00f6dinger bridge problem compared to the standard Sinkhorn algorithm. One major weakness pointed out by the reviewers is the empirical study. The experiments lack high dimensional examples and thorough comparisons to existing methods. A more serious issue is that the proposed method and theoretical result are problematic. In particular, a core discovery of this paper summarized in Proposition 3 turns out to be wrong. 
To see this, one can consider a static discrete Schr\\u00f6dinger Bridge problem where the goal is to find a matrix with given column and row sums and is closest to the prior. In this simple setting, it is easy to see the KKT conditions of these two optimization problems in Proposition 3 are different.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raise some questions on the results as well as the presentation of the paper. The authors reply by modifying the paper, adding experiments in the paper, and adding clarifications in the response. Some reviewers are not convinced. Overall, the reviewers do not seem to be excited about the results in this paper.\"}", "{\"comment\": \"Thank you for your rebuttal and apologies for the delayed response. I still have concerns regarding the empirical evaluation as I discussed in the initial review, even after considering the updated evaluation. Specifically, I am still concerned about the resampling quality even after consulting De Bortoli (2021), I would be curious in seeing alternative reference measures and if there are other relatively efficient processes that could be used improve this quality in followup work. I have adjusted my score accordingly.\"}", "{\"comment\": \"**We hope that your review has been fully addressed with the latest PDF update and the previous responses** we provided in the official comments. **We ask that you please refer to these and consider raising your score.** We greatly appreciate the improvements made to our paper thanks to your review.\\n\\nIn specific, we have addressed your comments by providing several new experiments such as a study of control over proximity, plots demonstrating integrity of initial distribution, and benchmarking against existing methods for image resampling (see Appendix D on pages 17-19). In addition to these, we have also provided answers to questions in the review, and made modifications in the text to reflect these. 
Please refer to the updated PDF and the previous comments in this thread for further details. If any further comments or questions arise please let us know as soon as possible as the review period is coming to an end.\"}", "{\"comment\": \"Response Thread (2/2).\\n\\n> \\u201cHow does halving the number of iterations quantitatively affect running time? A breakdown of runtime reductions relative to iteration count would provide a clearer picture of the efficiency gains.\\u201d\\n\\nWe propose adding runtimes to the convergence experiment in Figure 1. In addition, the comparison experiment we propose to post within several days should further address this concern for the image resampling case. Please let us know if these additions would fully address your concern. \\n\\nIn principle, for a fixed value of sigma and identical marginal constraints, a single inner iteration of our method should not take longer than a single inner iteration of alternative IPFP-based methods, like DSB.\\n\\n> \\u201cThe paper lacks specific metrics to assess resampling quality in each example case. What metrics do you use to evaluate how well the method preserves the integrity of the original distribution during resampling?\\u201d\\n\\nThe typical metric to assess resampling quality for the image generation case is the FID score. This information is already included in our original submission in Figure 4 and we have added an explanation in Section 5 for Image Resampling Lines 510-512 to make it clearer to readers. \\n\\nMoreover, we are proposing to do an experiment where we run competing algorithms (e.g., DSB) to address another comment and we will report the FID scores for those. We believe that this will further address this comment.\\nFor the 2D datasets, we computed Chamfer distances as an in-distribution metric for the generated samples. 
We will add a table with these results to address this concern for the 2D examples.\\n\\n> \\u201cAn empirical definition of proximity would clarify how closely generated samples align with input data. For example, on MNIST, how would a proximity value of a digit-5 image with ($\\\\sigma=1$) compare to that of a digit-3 image with a different ($\\\\sigma$)?\\u201d\\n\\nWe agree that this would add clarity to the notion of proximity. We propose to add a table with the distance measured using a similarity metric between the initial sample and the two generated samples with different values of sigma and will follow up soon with this table included in our revision.\\n\\n> \\u201cAlthough the paper claims empirical validation across various domains, the experimental setup lacks specificity. Could you provide more details on the datasets, experimental procedures, and metrics used to assess performance?\\u201d\\n\\nWe would like to clarify that details on the experimental setup are included in our original submission in Appendix C Implementation Details. In addition, our response to a prior question about metrics should clarify the latter half of this question, i.e., which metrics have been included in the original submission and which metrics we have added to the revision.\\n\\n> \\u201cIn Figure 2, the resampling estimate with $\\\\sigma=1$ appears to produce more concentrated samples compared to the original samples. Could you explain this behavior? Is there some form of geometric or manifold enhancement occurring in the resampling process?\\u201d\\n\\nThat\\u2019s an excellent question! To gain intuition about this phenomenon, we can look at our results for Gaussian transport in Figure 1. While our method approaches the mean of the closed-form solution with small noise, it initially approaches the ground-truth variance and then moves slightly below it as the iterations increase. 
We expect that the higher concentration of samples you noticed in the lower-dimensional case with classical distributions in Figure 2 is a result of the same phenomenon, which was incidentally also observed in De Bortoli (2021).\"}", "{\"comment\": \"We appreciate your feedback. Attached is a PDF that reflects changes addressing your concerns. Below, we respond to each of your comments and propose experiments that will be posted within several days. We believe that these address all concerns raised in your original review, but **please let us know as soon as possible if there are any other experiments or clarifications needed to strengthen our case**. We are confident that we can address these during the rebuttal period.\\n\\n> \\u201cMy main concerns come with the empirical evaluation of the method. [...] In the cases of low $\\\\sigma$, the variation is very small compared to the initial condition. This is likely because the method is estimating a map between itself that is effectively an OU process. Furthermore, when the \\u03c3 is large, the methods appear to be much more corrupt and lose some of the important features.\\u201d\\n\\nWe would like to point out that despite limited computational resources, our method still produces promising results. As evidence of this, we refer the reviewer to the results presented in De Bortoli (2021), specifically in Figure 4. Their results show significant image degradation, but subsequent papers showed higher resolution results when using significantly more computational resources. Similar to these works, we expect our method to produce more competitive results if given increased computational resources.\\n\\n> \\u201cWhen training, how are the data split? Is there any pairing done between the data points? How would that affect the mapping between the two sets.\\u201d\\n\\nThere is no pairing done between data points. In regards to the training data, we split the train labeled portion of each dataset into shuffled batches. 
Training samples are refreshed after a fixed number of iterations. Further specification on batch size, number of samples, and other data splitting parameters for each example is included in our original submission in Appendix C Implementation Details. The portion of the dataset labeled as test is split into batches used at test time. There is no overlap between train and test data. \\n\\n> \\u201cAre there other mean reverting processes that you can consider other than OU processes? How do these affect the performance of the method?\\u201d\\n\\nYes, there are probably other mean reverting processes that can be considered. The OU process is a natural candidate, but any time-symmetric process can be used. Not every mean reverting process, however, is time-symmetric, and time-symmetry of the prior is a requirement for the derivation of our method. The OU process is arguably the simplest case of a time-symmetric mean reverting process. This allows us to keep the method implementation straightforward at no cost to performance.\\n\\n> \\u201cIs there a set of results one can consider that consist of analyzing the regularity of the path measures?\\u201d\\n\\nThis is an excellent question! We propose adding a time increments experiment within the next few days to show regularity of the path measures. Specifically, one way to analyze the regularity of the path measures is to examine the difference between samples at subsequent time steps. If these differences decrease with time step size, one can reasonably expect regularity of the path measures. Our image resampling data is particularly useful for this experiment since the timesteps are sampled on a schedule rather than uniformly. We will use this data to plot time increment versus time step size. We believe that this experiment and analysis should address this question as well as resolve the related concern raised in the weaknesses section of the review. 
**Please let us know if this experiment would suffice to clarify this question and strengthen our case.**\"}", "{\"comment\": \"Thank you for your response. We very much appreciate you taking the time to review our modifications and we share your curiosity about this direction for future follow-up work. Thank you for raising your score.\"}", "{\"comment\": \"We thank the reviewers and the AC for pointing this out. We really appreciate the careful review! This is exactly the kind of feedback that makes the peer review process so important. We now recognize the issue with Proposition 3; our proof relied on a shaky proposition stated in past work. We have since developed a better understanding of the problem. We are working on a revised version that corrects this particular issue, and we're excited to modify our approach based on the insight provided and our new understanding. We encourage readers to please stay tuned and check out our revision when it is ready. Thank you again for the valuable review!\\n\\nSincerely,\\nThe Authors.\"}", "{\"comment\": \"We have posted an **updated version of the PDF including new experiments that address concerns in your review**. Please review the addition of Appendix D1 Proximity on line 895 (labeled \\u201cNEW\\u201d in Green) and Figure 8. There, we show an empirical measure of distances and demonstrate how sigma in fact is correlated with this distance metric in the image resampling examples we showed earlier in Figure 2.\\n\\nMoreover, we ask that you please review the addition of Appendix D3 Integrity of Initial Distributions on line 943 (labeled \\u201cNEW\\u201d in Green) and Figure 9, where we present results using Chamfer distance as an in-distribution metric for the pushforward samples of 2D datasets to show that our method preserves the integrity of the original distribution for this case. 
We believe that this addresses concerns in your original review.\n\nOne more experiment comparing our method to alternatives in the image resampling case will be posted in the next day or two. But **in light of the new additions, which address questions in your review, as well as the prior responses provided, we ask you to please consider raising your score**. We will keep you posted on the remaining addition, as soon as it is available, and would appreciate a follow up in the meantime. We appreciate your feedback and are looking forward to receiving a follow up from you soon!\"}", "{\"comment\": \"We have posted **a new PDF update that we believe fully addresses the concerns in your review**. In particular, please refer to Appendix D where we substantially expand on the empirical evaluation of our method. In the newly added Appendix D4, and Figures 10 and 11, you will find a comparison of our method with DSB and DSBM for image resampling. For convenience, we have summarized the conclusions drawn from the comparison experiment here:\", \"dataset\": \"MNIST. Task: Image resampling.\n\n| **Runtime** | Ours | DSB | DSBM-IPF |\n|---|---|---|---|\n|Total | **2.64hrs** | 5.25hrs | 12.47hrs |\n| Avg. Outer Iter. | **7.94min** | 15.7min | 37.41min |\n| Avg. Inner Iter. | 0.059s | **0.055s** | 0.209s |\n|Avg. Inference | 2.009s | 1.554s | **1.002s** |\n\nFID (Iteration 20) | Ours | DSB | DSBM-IPF |\n|---|---|---|---|\n| | 135.4 | N/A | N/A |\n| Backward model| N/A | 93.65 | 56.32 | \n| Forward model | N/A | 144 | 98.89 |\n\n**Overall, in this particular experiment, we see that our method makes a trade-off between a small reduction in sample quality for a significant speed-up in training, while also preserving the time-symmetry of the solution.**\n\nWe have also tested the same experiment using the CelebA dataset and observed issues with mode collapse with one of the alternative methods. 
Given the short discussion window and to be fair to the method in question, we have omitted these results but would like to run more ablation tests to possibly include these for the camera-ready version of our paper.\\n\\n**We believe that your review has now been fully addressed with the new additions and ask that you please consider raising your score.** We are thankful for the suggestions and comments made in your review, and we believe our current version of the paper has been much improved by addressing them!\"}", "{\"comment\": \"We appreciate your thoughtful feedback and suggestions. We are eager to improve our paper and confident that we can address your concerns within the rebuttal period. Below, we respond to each comment and, in some cases, propose to conduct additional experiments within the next few days. We believe these experiments will fully address any remaining concerns. Please also refer to the **attached PDF** reflecting these changes.\\n\\n**Should there be any further experiments that would strengthen our submission or any further concerns left unaddressed, please let us know as soon as possible.**\\n\\n> \\u201cThe theoretical results are limited to asymptotic analysis. The convergence rate is not presented.\\u201d\\n\\nWe agree that this additional analysis would benefit our theoretical results and we appreciate the suggestion. In fact, the convergence rate can be readily obtained from equation (6) in the paper with little further analysis. We have added this result in the proof (and accompanying statement) of Theorem 1 in Section 4.2 Lines 287-290. In brief, the convergence rate is o(1/n), where n is the number of iterates.\\n\\n> \\u201cIn the empirical evaluation, comparison to baseline methods is limited to the Gaussian example in Section 5.1. As a result, it's not clear how MSB compares to other methods in real-world image generation.\\u201d\\n\\nThis is an excellent suggestion. 
We propose to add a comparison to baseline methods for image generation. Specifically, we will post an experiment within several days that compares results in Section 5 for 2D Datasets and Image Resampling to other methods, such as DSB or DSBM. Please let us know if this would suffice to address your concerns about the empirical evaluation.\\n\\n>\\u201cWhat's the connection and difference between the MSB method and the score-matching strategy (like Song et al. 2021)? What's the performance difference?\\u201d\\n\\nThank you for bringing attention to this point. We have added a discussion on the connection and difference between MSB and score-based generative modeling (SGM) in Section 2 Lines 109-113. There, we point out that unlike SGMs, our method provides a tool to flow an existing sample somewhere else in the same data distribution with control over the spread of the newly obtained sample. In contrast, SGMs flow samples from a Gaussian to the data distribution. While a direct empirical comparison between the two is not appropriate, since these are two fundamentally different problem statements, we agree that this additional discussion improves the paper.\"}", "{\"comment\": \"We have posted **a new PDF update that we believe fully addresses the concerns in your review**. In particular, please refer to Appendix D where we substantially expand on the empirical evaluation of our method. In the newly added Appendix D4, and Figures 10 and 11, you will find a comparison of our method with DSB and DSBM for image resampling. For convenience, we have summarized the conclusions drawn from the comparison experiment here:\", \"dataset\": \"MNIST. Task: Image resampling.\\n\\n| **Runtime** | Ours | DSB | DSBM-IPF |\\n|---|---|---|---|\\n| Total | **2.64hrs** | 5.25hrs | 12.47hrs |\\n| Avg. Outer Iter. | **7.94min** | 15.7min | 37.41min |\\n| Avg. Inner Iter. | 0.059s | **0.055s** | 0.209s |\\n| Avg. Inference | 2.009s | 1.554s | **1.002s** |\\n\\n| **FID (Iteration 20)** | Ours | DSB | DSBM-IPF |\\n|---|---|---|---|\\n| | 135.4 | N/A | N/A |\\n| Backward model | N/A | 93.65 | 56.32 |\\n| Forward model | N/A | 144 | 98.89 |\\n\\n**Overall, in this particular experiment, we see that our method trades a small reduction in sample quality for a significant speed-up in training, while also preserving the time-symmetry of the solution.**\\n\\nWe have also tested the same experiment using the CelebA dataset and observed issues with mode collapse with one of the alternative methods. Given the short discussion window and to be fair to the method in question, we have omitted these results but would like to run more ablation tests to possibly include these for the camera-ready version of our paper.\\n\\n**We believe that your review has now been fully addressed with the new additions and ask that you please consider raising your score.** We are thankful for the suggestions and comments made in your review, and we believe our current version of the paper has been much improved by addressing them!\"}", "{\"title\": \"Reviewer's response\", \"comment\": \"Thank you for your time and effort. I will thoroughly review the updated draft and take it into consideration before finalizing my score at the end of the review period.\"}", "{\"summary\": \"The authors consider a modification to the original Schr\\u00f6dinger bridge problem where they consider the marginals to be the empirical measure of the data. This learns a coupling between two sets of the same data. The coupling is designed to be optimal in the relative entropy sense. They modify the iterative proportional fitting procedure such that they project in the direction that minimizes the KL divergence in one step and then in the reverse KL divergence in the next step due to the analytical feasibility of the step.
They propose a practical algorithm for computing the projections using a change of measure technique and optimizing the drifts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors provide an interesting perspective on the Schr\\u00f6dinger bridge problem and provide a new technique for fitting a path measure connecting an initial condition given by itself to itself.\\n\\nThe method is fairly straightforward to implement and easy to analyze.\\n\\nThe method provides optimality with respect to relative entropy, which is a nice property of the sample paths.\", \"weaknesses\": \"My main concerns come with the empirical evaluation of the method.\\n\\nWhile the method has nice motivation, empirically the results do not seem to be impressive. In the cases of low \\\\sigma, the variation is very small compared to the initial condition. This is likely because the method is estimating a map between itself that is effectively an OU process. Furthermore, when the $\\\\sigma$ is large, the outputs appear to be much more corrupted and lose some of the important features.\\n\\nIn general the performance of the method does not seem to be well studied. Since one of the motivations the authors mentioned was based on the path measure being optimal with respect to relative entropy, I would show some of these results on the regularity of the path space.\", \"questions\": \"When training, how are the data split? Is there any pairing done between the data points? How would that affect the mapping between the two sets?\\n\\nAre there other mean reverting processes that you can consider other than OU processes?
How do these affect the performance of the method?\\n\\nIs there a set of results one can consider that consist of analyzing the regularity of the path measures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes mirror Schrodinger bridge (MSB), a model for conditional resampling. An alternating minimization procedure is used to solve for the MSB with a theoretical guarantee. On the empirical side, the MSB method is implemented to sample from both toy distributions and image distribution from the real world.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"By using time-symmetry, MSB only requires a single neural network and half of the computational expense compared to other IPFP-based algorithms.\", \"weaknesses\": \"1. The theoretical results are limited to asymptotic analysis. The convergence rate is not presented;\\n2. In the empirical evaluation, comparison to baseline methods is limited to the Gaussian example in Section 5.1. As a result, it's not clear how MSB compares to other methods in real-world image generation.\", \"questions\": \"What's the connection and difference between the MSB method and the score-matching strategy (like Song et al. 2021)? What's the performance difference?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your thoughtful responses to my concerns and questions. After carefully reviewing your clarifications, I have decided to maintain my original score.\\n\\nI appreciate the effort you have put into addressing the issues. All the best.\\n\\nSincerely,\\n\\nReviewer gCiH\"}", "{\"comment\": \"We have posted an **updated version of the PDF including an experiment that addresses the one remaining concern in your review**, i.e. 
results on the regularity of path measures. Please review the addition of Appendix D2 Regularity on line 936 (labeled \\u201cNEW\\u201d in Green) and Figure 8. There, we show a metric of total path length over distance for different values of $\\\\sigma$. For a given sample trajectory X_i, we obtain this metric via the inverse ratio of || X_0 - X_{M-1} || (distance) and $\\\\sum$|| X_{k+1} - X_{k} || (total path). Smaller values of this metric loosely indicate regularity of the sample path. Please let us know as soon as possible if any further clarification or specific analysis is needed.\\n\\n**In light of the new additions, as well as the prior responses provided, we ask you to please consider raising your score.** We appreciate your feedback and are looking forward to receiving a follow up from you soon!\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe would like to check in with all of you as we have not received any responses during the discussion period so far and it\\u2019s coming to an end. **We hope that your reviews have been fully addressed with the new PDF update, as well as responses provided in the official comments, and ask that you please consider raising your score.** Should any further clarification or discussion be needed, please let us know as soon as possible. We appreciate all the suggestions and comments in your reviews and believe that our submission has benefited from addressing them. We hope you will consider updating your score in light of these.\\n\\nFor convenience, here is an overall summary of the modifications:\\n* **New results or clarifications to address the theoretical questions**, e.g. derivation of the convergence rate for our algorithm, clarification of choice of prior. 
These have been reflected with additions throughout the text (labeled \\u201cNEW\\u201d in Green or \\u201cFIX\\u201d in Orange).\\n* **New experiments**, which can be found in Appendix D (labeled \\u201cNEW\\u201d in Green) and substantially expand on the empirical evaluation of our method. These new experiments include a **comparison to alternative methods for image resampling, new evaluation metrics, results on the regularity of sample paths and control over proximity**.\\nFor further details on how these address questions specific to your review, please refer to the individual official comments we have made so far. \\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"One more experiment comparing our method to alternatives in the image resampling case will be posted in the next day or two. We will keep you posted, as soon as it is available, but would appreciate a follow up in the meantime. **Please let us know as soon as possible if there are any other clarifications needed based on the responses we already provided, or if these items are fully addressed.** We appreciate your feedback and are looking forward to hearing back from you soon! **In light of the new additions, which address questions in your review, as well as the prior responses provided, we ask you to please consider raising your score, especially once your review is fully addressed.**\"}", "{\"title\": \"Final remarks to reviewers\", \"comment\": \"We thank the reviewers for their helpful comments and suggestions. As the discussion period comes to an end, we outline the changes made to our original submission and present final remarks to support the consideration of our work:\\n\\n### ***New Experiments (in Appendix D)***\\n\\n*Comparison to alternative methods.* We added an experiment comparing our method to both DSB and DSBM for image resampling. 
We observe that our method provides a significant speed-up in training, while also preserving the time-symmetry of the solution.\\n\\n*Empirical Notion of Distance.* We added an experiment that shows an empirical measure of distances, as suggested by reviewer gCiH, and demonstrates how larger sigma values correspond to quantitatively farther samples. This new set of results demonstrates that our method provides control over sample distance from input, as claimed in theory.\\n\\n*Regularity of Sample Paths.* As noted by reviewer yMTk, our method \\u201cprovides optimality with respect to relative entropy, which is a nice property of the sample paths.\\u201d To provide quantitative results that analyze the regularity of sample paths, we added an experiment in which we observe that smaller sigmas produce more regular paths.\\n\\n*Integrity of Initial Distribution.* We present new results using Chamfer distance as an in-distribution metric for the pushforward samples of 2D datasets and new results using FID scores for image resampling of MNIST dataset to demonstrate integrity of the original distribution.\\n\\n### ***Other Additions***\\nWe have provided clarifications to address all concerns raised in the initial reviews. A number of these were reflected in the text, including a *proof of rate of convergence for our method* and *discussion on the connection and difference between MSB and score-based generative modeling (SGM)*.\\n\\n### ***Final Remarks***\\n\\nMultiple reviewers acknowledged the computational advantages of our method, with reviewer gCiH saying, \\u201c[Our] innovation significantly reduces computational costs\\u201d and reviewer Rjzk describing this reduction on computational burden as \\u201cappealing.\\u201d These comments were complemented by the new results presented in Appendix D4, where we show a significant reduction in runtime for image resampling when compared to alternative methods. 
Regarding novelty, reviewer Rjzk notes that our work is \\u201cunlike most of the literature which focuses on the [Schr\\u00f6dinger Bridge] between two distributions,\\u201d and reviewer yMTk says we \\u201cprovide an interesting perspective on the Schr\\u00f6dinger bridge problem.\\u201d Reviewer gCiH noted our work\\u2019s **\\u201ccapability to produce high-quality proximal samples that are valuable for tasks like data augmentation and generative modeling.\\u201d** We believe that these indicate that our work would be a valuable tool for conditional resampling and a welcome addition to the ICLR program.\\n\\nOnce again, we thank the reviewers for their time and hope that the points highlighted above will be taken into consideration. We appreciate the improvements made to our paper based on their feedback. \\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"One more experiment comparing our method to alternatives in the image resampling case will be posted in the next day or two. We will keep you posted, as soon as it is available, but would appreciate a follow up in the meantime. **Please let us know as soon as possible if there are any other clarifications needed based on the responses we already provided, or if these items are fully addressed.** We appreciate your feedback and are looking forward to hearing back from you soon! **In light of the new additions, which address questions in your review, as well as the prior responses provided, we ask you to please consider raising your score, especially once your review is fully addressed.**\"}", "{\"comment\": \"**We hope that your review has been fully addressed with the latest PDF update and the previous responses** we provided in the official comments. 
**We ask that you please refer to these and consider raising your score.** We greatly appreciate the improvements made to our paper thanks to your review.\\n\\nIn specific, we have addressed your comments by expanding the empirical evaluation of our method (see Appendix D on pages 17-19) with an experiment indicating regularity of sample paths and a comparison with alternative methods for image resampling. We have also provided direct answers to questions raised in your review. Please refer to the updated PDF and the previous comments in this thread for further details. If any further comments or questions arise please let us know as soon as possible as the review period is coming to an end.\"}", "{\"comment\": \"Response Thread (1/2). We thank you for your valuable feedback. Below, we respond to each comment, along with proposed additional experiments that we will post within the next few days. Please also refer to the attached PDF reflecting these changes.\\n\\n**If there are further experiments or concerns that require clarification, please let us know as soon as possible.** We are committed to improving our paper and are confident in our ability to address your concerns within the rebuttal period.\\n\\n> \\u201c[...] the framework retains substantial mathematical similarities to traditional Schr\\u00f6dinger bridges. Could you clarify the key mathematical distinctions between the two approaches?\\u201d\\n\\nThank you for highlighting this point. While our method is clearly linked to previous approaches for learning the Schr\\u00f6dinger bridge, such as DSB, our key mathematical insight is that in the case of identical marginals, we can leverage time-symmetry to derive a different projection-type algorithm. While the outer structure of our algorithm retains two projection steps, as is the case with DSB, the projection steps themselves are different. In particular, one of our projection steps is completely analytic and requires no learning to perform. 
The latter allows us to perform an algorithm with a single variable (instead of two) and obtain a solution more efficiently (with half the training iterations).\\n\\n> \\u201cThe paper introduces the Alternating Minimization Procedure (AMP) in Equations (4) and (5) but lacks an error analysis. Could you elaborate on the convergence speed of AMP? Specifically, are there theoretical bounds or guarantees on the convergence rate that would enhance the theoretical foundation of your method?\\u201d\\n\\nThis is an interesting question, thank you for asking! The convergence rate can be readily obtained from equation (6) in the paper. In short, the convergence rate is o(1/n) where n is the number of iterates. We have added this analysis to the proof (and accompanying statement) of Theorem 1 in Section 4.2 Lines 287-290.\\n\\n> \\u201cCould you clarify which algorithms you compared with your proposed method for each example case in Section 5? Benchmarking against established methods would help situate your approach within existing literature.\\u201d\\n\\nSection 5 compares our method to DSB and MSBM, two algorithms used to learn general Schr\\u00f6dinger bridges. The comparison in the original submission is done for the Gaussian transport case; it can be found in Figure 1. \\n\\nWe agree that further comparison will improve our submission and we appreciate your suggestion. We propose an experiment, which we will post within several days, that compares results in Section 5 for 2D Datasets and Image Resampling to other methods for the image resampling examples. Please let us know if this experiment would suffice to address this comment.\\n\\n> \\u201cYour algorithm employs Euler-Maruyama discretization. How does this choice impact the accuracy and efficiency of solving the Schr\\u00f6dinger bridge problem? 
Have you considered alternative discretization schemes, and how do they compare in terms of performance?\\u201d\\n\\nEuler-Maruyama (EM) allows us to compute the reverse drift as a regression problem. This approach has the advantage of approximating the drift without the need of computing the score function. Another advantage is that its implementation is straightforward. EM is the standard choice in the Schr\\u00f6dinger bridge and diffusion-based sampling communities for SDE integration.\", \"this_choice_of_discretization_in_principle_might_impact_accuracy_and_efficiency_in_the_following_ways\": \"Since the computation is done via local estimates, the training process can suffer from divergences. This is addressed in De Bortoli (2021) by the implementation of exponential moving averages during training, and the same solution is implemented in our training framework. We have highlighted this in the revised PDF in Appendix C Implementation Details (please refer to the text added in Green labeled \\u201cNEW\\u201d in Lines 808-809).\\n\\nThere is a tradeoff between computational expense and accuracy. The drift estimates are done locally using finite difference time intervals. Hence, the smaller the time intervals, the better the approximation is. On the other hand, sampling at more time intervals increases the computational cost.\\n\\nThese are common disadvantages shared by alternative frameworks relying on EM discretization, such as De Bortoli (2021), Vargas (2021), and Winkler (2023). A broad study comparing SDE integrators\\u2019 effects on performance of Schr\\u00f6dinger bridge and diffusion models is an interesting topic for future work but outside the scope of our current study.\\n\\n**Please refer to the next Comment, where our response continues.**\"}", "{\"comment\": \"**We hope that your review has been fully addressed with the latest PDF update and the previous responses** we provided in the official comments. 
**We ask that you please refer to these and consider raising your score.** We greatly appreciate the improvements made to our paper thanks to your review.\\n\\nSpecifically, we have addressed your comments by providing several new experiments (see Appendix D on pages 17-19), such as a comparison with alternative methods for image resampling, and answers to all the questions in your initial review. Please refer to the updated PDF and the previous comments in this thread for further details. If any further comments or questions arise please let us know as soon as possible as the review period is coming to an end.\"}", "{\"comment\": \"We have posted **a new PDF update that we believe fully addresses the concerns in your review**. In particular, please refer to Appendix D where we substantially expand on the empirical evaluation of our method. In the newly added Appendix D4, and Figures 10 and 11, you will find a comparison of our method with DSB and DSBM for image resampling. For convenience, we have summarized the conclusions drawn from the comparison experiment here:\", \"dataset\": \"MNIST. Task: Image resampling.\\n\\n| **Runtime** | Ours | DSB | DSBM-IPF |\\n|---|---|---|---|\\n| Total | **2.64hrs** | 5.25hrs | 12.47hrs |\\n| Avg. Outer Iter. | **7.94min** | 15.7min | 37.41min |\\n| Avg. Inner Iter. | 0.059s | **0.055s** | 0.209s |\\n| Avg. Inference | 2.009s | 1.554s | **1.002s** |\\n\\n| **FID (Iteration 20)** | Ours | DSB | DSBM-IPF |\\n|---|---|---|---|\\n| | 135.4 | N/A | N/A |\\n| Backward model | N/A | 93.65 | 56.32 |\\n| Forward model | N/A | 144 | 98.89 |\\n\\n**Overall, in this particular experiment, we see that our method trades a small reduction in sample quality for a significant speed-up in training, while also preserving the time-symmetry of the solution.**\\n\\nWe have also tested the same experiment using the CelebA dataset and observed issues with mode collapse with one of the alternative methods.
Given the short discussion window and to be fair to the method in question, we have omitted these results but would like to run more ablation tests to possibly include these for the camera-ready version of our paper.\\n\\n**We believe that your review has now been fully addressed with the new additions and ask that you please consider raising your score.** We are thankful for the suggestions and comments made in your review, and we believe our current version of the paper has been much improved by addressing them!\"}", "{\"comment\": [\"As a reminder, we posted an **updated version of the PDF with several additions that we believe fully address the concerns and questions in your original review.** **We kindly ask that you please raise your score in light of these.** If you have any further questions, please let us know as soon as possible. We appreciate your feedback and are eager to respond if further clarification is needed. Your original review had three concerns; here\\u2019s a summary of how we addressed all of them in the PDF and previous comments:\", \"**Convergence rate**: We derived the convergence rate for our algorithm and added it to the proof and statement of Theorem 1 in Section 4.2 Lines 287-290. In brief, the convergence rate is o(1/n), where n is the number of iterates.\", \"**Comparison to baseline methods**: We added an experiment in Appendix D4 Lines 959-1017 comparing our method to both DSB and DSBM for image resampling. We observe that our method makes a trade-off between a small reduction in sample quality for a significant speed-up in training, while also preserving the time-symmetry of the solution.\", \"**Clarifying connection to score-based approaches**: We have added a discussion on the connection and difference between MSB and score-based generative modeling (SGM) in Section 2 Lines 109-113. 
There, we point out that unlike SGMs, our method provides a tool to flow an existing sample somewhere else in the same data distribution with control over the spread of the newly obtained sample. In contrast, SGMs flow samples from a Gaussian to the data distribution.\", \"In addition to modifications directly addressing your review, **we have also added several other experiments to expand the empirical evaluation of our method**, which can be found in Appendix D (pages 17-19).\"]}", "{\"summary\": \"This work studies the Schr\\u00f6dinger bridge (SB) between a distribution and itself, unlike most of the literature which focuses on the SB between two distributions (e.g., the standard Gaussian, and the data distribution). The authors propose this model as a means for conditional resampling of a distribution, where the \\\"noise\\\" induced by the bridge process allows them to obtain new samples which are in-distribution. They demonstrate their approach on many experiments, and have some proofs of their technical results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper proposes to alleviate some computation burden of training SBs by proposing a learning algorithm that learns the SB between a measure and itself. Still faithful to the generative modeling paradigm, they learn the SB from a data distribution to itself, with the goal of starting at existing samples and generating diverse ones. The writing is relatively clear, and Figure 2 is especially clear at describing the phenomenon of \\\"starting from an existing sample\\\" and, given enough noise, learns to go somewhere else in the distribution.\", \"weaknesses\": \"While the computational burden of having to train two neural networks instead of one is appealing, there is little comparison between MSB and DSB or DSBM in terms of quantifying sample versatility (they performed some comparisons in the Gaussian case, but these are far from conclusive). 
For instance, in Figure 3: is there a certain $\\\\sigma_\\\\star$ after which the data generated by the MSB changes classes? This is likely hard to prove theoretically, but knowing if there exists some threshold after which data stops being the same class would be very interesting.\", \"questions\": [\"Comments:\", \"Line 053: maybe write what \\\"\\\\delta\\\"-measure means (or just say Dirac measure).\", \"The paragraph just above figure 3 is very unclear. I'm not sure what the rows and columns are meant to refer to here..\", \"The ability to use one neural network is not surprising from the connection between EOT and the SB problem. Equation (24-25) in the work by Feydy et al. (2019) precisely proposes some kind of fixed-point equation on one potential function (whereas for entropic OT between two measures, there are typically two potentials to optimize over via the Sinkhorn algorithm)\", \"Question: Is there any hope to provide a *rule* for choosing the amount of noise added in the process of generating images?\", \"Question: The choice of OU process appears entirely arbitrary. Why not consider standard Brownian motion as the reference process?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**We hope that your review has been fully addressed with the latest PDF update and the previous responses** we provided in the official comments. **We ask that you please refer to these and consider raising your score.** We greatly appreciate the improvements made to our paper thanks to your review.\\n\\nIn specific, we have addressed your concerns by providing a convergence rate for our algorithm, clarifying the difference between our method and SGMs, and including additional experiments such as a comparison to DSB and DSBM for image resampling (see Appendix D on pages 17-19). 
For more details on modifications made to address your initial review, please refer to the previous official comments in this thread. If any further comments or questions arise please let us know as soon as possible as the review period is coming to an end.\"}", "{\"comment\": \"Thank you for your clarification and efforts, I will maintain my score.\"}" ] }
0EP01yhDlg
Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition
[ "Artem Basharin", "Andrei Chertkov", "Ivan Oseledets" ]
We propose a new model for multi-token prediction in transformers, aiming to enhance sampling efficiency without compromising accuracy. Motivated by recent work that predicts the probabilities of subsequent tokens using multiple heads, we connect this approach to rank-1 canonical tensor decomposition. By generalizing it to a rank-r canonical probability decomposition, we develop an improved model that predicts multiple tokens simultaneously. This model can also be interpreted as a mixture of experts, allowing us to leverage successful techniques from that domain for efficient and robust training. Importantly, the overall overhead for training and sampling remains low. Our method demonstrates significant improvements in inference speed for both text and code generation tasks, proving particularly beneficial within the self-speculative decoding paradigm. It maintains its effectiveness across various model sizes and training epochs, highlighting its robustness and scalability.
[ "Large language model", "Self-speculative decoding", "Multi-token prediction", "Low-rank approximation" ]
Reject
https://openreview.net/pdf?id=0EP01yhDlg
https://openreview.net/forum?id=0EP01yhDlg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wxiVDwMgAX", "oAn0PDmERG", "ejDUwXJB41", "dzupEXEHLz", "dWSolYumnl", "dBhnwEgwq5", "d7B4RWnVGJ", "TuIcKAfsKU", "GZ9ZO0Iyk9", "DyZM7PyrRG", "DlXcLDtHv8", "3sdbr0WjjO", "1cx9Eaufbb" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732402285804, 1732626627616, 1733173425360, 1731139521014, 1732553823546, 1732402929178, 1732402599119, 1730339862666, 1737524278763, 1730718287744, 1730753470685, 1732403119963, 1734831098693 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13738/Authors" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_MLew" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_5c1w" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_MLew" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_8dbJ" ], [ "ICLR.cc/2025/Conference/Submission13738/Authors" ], [ "ICLR.cc/2025/Conference/Submission13738/Authors" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_Juyo" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_5c1w" ], [ "ICLR.cc/2025/Conference/Submission13738/Reviewer_8dbJ" ], [ "ICLR.cc/2025/Conference/Submission13738/Authors" ], [ "ICLR.cc/2025/Conference/Submission13738/Area_Chair_93L7" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer, Thank you very much for analyzing our work and formulating the specific questions you asked! Below are our responses to your questions.\\n\\n1. Experiments are done on small datasets and small models...\\n\\nTo address your concerns, we've conducted an additional set of experiments on a 1B model using fineweb as a dataset. The results are in the attached table. 
We've generated 2048 tokens 10 times and averaged the results between different runs.\\n\\n| Generation of 2048 tokens | Time (s) | Number of parameters (B) | Average number of accepted tokens |\\n|------------------------|-----------|---------------|------------|\\n| base | 43.8 | - | - |\\n| rank 1 | 34 | 0.98 | 1.58 |\\n| rank 2 | 28.9 | 1.2 | 1.96 |\\n| rank 4 | 31.5 | 1.9 | 1.85 |\\n\\n\\n\\n2. ... little else is provided aside from loss curves of training runs and token acceptance rates for the scheduled sampling approach...\\n\\nWe've also provided time per token for the tinystories dataset. Experiments with fineweb also contain time per token. To the best of our knowledge, these are the most relevant metrics for evaluating our approach. If there are other specific metrics you'd like us to consider, we would be happy to explore them.\\n\\n3. ... performance of these models on various benchmarks to estimate the quality of these trained models ...\\n\\nWe've trained different models on different datasets to ensure the compatibility of our method with different tasks. Overall, our method is a generalization of the method from [1], so we compare the performance of our novel method with this previous one.\\n\\n4. Comparison to other speculative sampling approaches with various draft models will give a better idea ...\\n\\nAt the current scale, alternative approaches seem faster. However, as the model scale increases, the approach from [1] becomes more and more effective. And as our approach has better performance than the approach from [1], we believe that our approach also scales well with model size.\\n\\n5. There is room for improvement in presentation ...\\n\\n\\nThank you for this feedback! Our goal is to make our paper as accessible as possible.
Algorithm 1 mainly describes the compatibility of our newly proposed draft model with the existing approach; therefore, the description is not as detailed as in the sources in which the speculative decoding approach was first described. As for Figure 1, we believe that it answers the question \\\"How is an n-dimensional probability distribution over $R^n$ parameterized by a set of $2$-dimensional matrices?\\\". We think that both Algorithm 1 and Figure 1 are essential to the narrative we're constructing in this article. However, we can add additional figures, especially if there are specific issues that can be explained graphically.\n\n\nWe hope our responses address your concerns and improve the clarity and completeness of our manuscript. Please let us know if there are any other aspects we should focus on. Thank you again for your valuable feedback!\"}", "{\"title\": \"Thanks for your rebuttal\", \"comment\": \"Hello, thanks for responding with some additional experiments on 1B models. My overall impression about the paper and the empirical comparison remains the same so I am keeping my score unchanged.\"}", "{\"comment\": \"Thanks for your detailed response! The new results on fineweb look promising, and more results of that form would make the paper stronger. I have increased my score to a 5, but I still think that the paper needs better organization and more complete evaluation.\"}", "{\"summary\": \"This paper borrows a key idea from Gloecke et al. [1] to train multi-token predictors instead of a single next-word predictor. This work identifies a key flaw in [1], which is that the distributions for the $n$ future tokens are independent of each other, thus ignoring token interdependency. This work interprets this as a rank-1 approximation to the full distribution tensor of the $n$ next tokens and proposes to improve this to a higher rank estimate. 
This higher rank estimate is achieved by $r$ heads defining $r$ different distributions and using their mixture for the $n$-future-token prediction. The training and inference method for this is discussed, followed by an observation that the multi-token predictor can be used in a self-speculative sampling approach, where next-word prediction is made faster by using a proposal distribution that predicts multiple next tokens. The experiments are mainly performed on a nano-GPT model architecture trained on the TinyStories dataset, and also on finetuning the PyCodeGPT model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-- The paper studies an interesting problem: speeding up decoding by predicting multiple tokens in parallel at higher acceptance rates than typical speculative sampling approaches.\n\n-- The proposed solution seems straightforward to implement.\n\n-- The contribution of identifying issues with existing multi-token training approaches and proposing a higher rank alternative is novel.\", \"weaknesses\": \"-- The evaluation leaves a lot to be desired. Experiments are done on small datasets and small models but, more concerningly, little else is provided aside from loss curves of training runs and token acceptance rates for the scheduled sampling approach. As an example, performance of these models on various benchmarks to estimate the quality of these trained models would aid in better assessment of the approach. Also, it is unclear if this approach empirically scales to larger datasets and models effectively in terms of speed and performance.\n\n-- Comparison to other speculative sampling approaches with various draft models will give a better idea about the improvement in speed and resources with the proposed approach.\n\n-- There is room for improvement in presentation. Figure 1 doesn't help with understanding the paper better and is confusing. Algorithm 1 can also be described more clearly. 
Currently, it hinges on the reader's prior understanding of speculative decoding.\", \"questions\": \"Please address the issues above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response and the clarifications to Table 1 and 3. However, I still think comparisons to other baselines are needed besides just Goeckle et al. 2024 (i.e. rank=1) e.g. EAGLE/Medusa as another other reviewer points out.\\n\\nMoreover, I think the improved speedup over the rank-1 model is also modest overall.\"}", "{\"comment\": \"1. The paper would benefit from more thorough evaluation and stronger results.\\n\\nThank you for the detailed and constructive feedback! In Table 3 in the provided paper we see the calculation of $\\\\alpha$ for different pairs of target and draft models. However, we think, that in our framework calculating $\\\\alpha$ makes no practical sense for two reasons. The first one is the fact, that probabilities of accepting prefixes of different lengths are i.i.d (paragraph under definition 3.1 in the original paper). This assumption makes perfect sense in the case of usual speculative decoding, but this is not true in our case. Secondly, in practice, $\\\\alpha$ allows to calculate of an optimal amount of generated draft tokens. This knowledge is basically useless in our case because once we trained the model with some $d$, it makes no sense to sample less, than $d$ tokens from it. \\n\\nTo address concerns about broader evaluations, we have conducted additional experiments with a 1B parameter model trained on the fineweb dataset. For inference we've generated 2048 tokens 10 times and averaged the results between runs. 
These results are provided in the updated table below, showcasing our approach's scalability and applicability to non-synthetic datasets.\n\n| Generation of 2048 tokens | Time (s) | Amount of parameters (B) | Average number of accepted tokens |\n|------------------------|-----------|---------------|------------|\n| base | 43.8 | - | - |\n| rank 1 | 34 | 0.98 | 1.58 |\n| rank 2 | 28.9 | 1.2 | 1.96 |\n| rank 4 | 31.5 | 1.9 | 1.85 |\n\n\n2. The majority of the experiments section seems to involve analysis rather than results.\n\nThank you for that suggestion. We considered such a division but decided to leave the presentation of the results as it is. In our case, dividing the results into parts, as you suggested, would have led to unnecessary fragmentation, which in turn would have made the article more difficult to understand.\n\n3. In Figure 3, it seems like hyperparameters are being selected using the test set; I would suggest using a dev set instead.\n\nYou're right; this is an oversight on our part. However, we tune this parameter on only one dataset, and for other tasks we've used the same penalty on the auxiliary loss. Also, the results do not change when we use a separate dev set.\n\n4. To make comparisons fair, I would suggest training each rank for the same amount of wall-clock time, rather than a number of steps, in case higher ranks require more time per forward pass.\n\nIn small-scale experiments, we trained until convergence, so adding additional iterations to the base model won't change the result. In any case, we haven't claimed any speedups to the training process itself. \n\n\n5. The self-speculative setup makes the results hard to interpret because each rank uses a different target model. I would suggest that each method be speculative with respect to the same target model.\n\nYou're right; technically the statement 'our model accepts more tokens per forward pass than a baseline model' is meaningless. 
However, specifically for that reason, we provide a chart, which can be seen on the right of Figure 2. Here one can see that our trained models achieve basically the same loss when we look at the first token only. This means that the target model in each case has the same loss, which, in our opinion, makes the comparison fair.\n\n\n6. The paper would be clearer if the experiments were described concretely: for example, the paper states that \\\"Our measurements were made with seq length varying from 1024 to 4096\\\" (lines 408-409), but it's not clear which experiments use which sequence lengths.\n\nThank you for highlighting this ambiguity. For the specific example mentioned, we sampled 1,000 random sequence lengths from the interval [1024, 4096] and ran the model on prefixes of these lengths. We will revise the manuscript to describe the experimental setup in more concrete terms for all relevant cases.\n\nWe hope these responses address your concerns and clarify the points you raised. Your suggestions have been instrumental in improving the clarity and rigor of our work, and we sincerely appreciate your thorough review. Please let us know if there are additional aspects you would like us to address.\"}", "{\"comment\": \"Dear reviewer, thank you very much for your analysis of our work and the specific notes you formulated! Below we provide our responses to them. We will be happy to answer any additional questions you may have.\n\n1. ... how much of a speed up the author's approach gives over both the approach of Goeckle et al. 2024 (i.e. rank=1) and also vanilla non-autoregressive decoding for the same level of quality.\n\nThank you for raising this important point. Table 1 in the original submission includes a comparison with the baseline approach. However, we recognize that it lacked a direct comparison with vanilla non-autoregressive decoding. 
To address this, we have conducted additional experiments using the fineweb dataset, which include this essential comparison. We've trained a 1B base model from scratch using different ranks. For inference, we've generated 2048 tokens 10 times and averaged the results across runs. The results can be seen in the table below.\n\n| Generation of 2048 tokens | Time (s) | Amount of parameters (B) | Average number of accepted tokens |\n|------------------------|-----------|---------------|------------|\n| base | 43.8 | - | - |\n| rank 1 | 34 | 0.98 | 1.58 |\n| rank 2 | 28.9 | 1.2 | 1.96 |\n| rank 4 | 31.5 | 1.9 | 1.85 |\n\n\n\n2. ... in Table 1 the final column (time per token) is not much different across all the rows?\n\nIn this column, we observe a 10 percent speedup when compared to the baseline speculative decoding approach. When compared to the vanilla model, the acceleration is much greater, as you can see in the table above.\n\n3. I don't quite understand Table 3.\n\nIn this table, we show how replacing a vanilla head with our proposed head affects the performance of one forward pass of the model. As we can see, as the model size increases, the overhead becomes less noticeable. \n\n4. ... the authors need additional baselines ...\n\nAs our approach is a generalization of the approach introduced in [1], we compared our approach only with this baseline.\n\n\n5. ... cite and discuss related work in non-autoregressive decoding\n\n\nThank you for adding the necessary context for our work. We were mainly focused on the task of text generation, so we overlooked those works. We'll cite those works, as they are relevant to our topic.\n\n\n6. How does the method combine with beam search?\n\nOur method inherits the same limitations as the approach introduced in [1]. 
It is fully compatible with commonly used text generation strategies, including sampling with temperature and top-k candidate selection, which we have implemented in our experiments. While we have not yet tested beam search specifically, we see no theoretical obstacles to combining our method with it. We plan to explore this combination in future work.\n\n7. Does the speedup increase or decrease as a function of model size?\n\nTypically, as the model size increases, the ratio (computational complexity of the head)/(computational complexity of the entire model) becomes smaller. This benefits our method, as our modification allows us to make fewer forward passes at the cost of a more computationally expensive head.\"}", "{\"summary\": \"This paper focuses on speculative decoding methods that incorporate additional prediction heads. The authors conceptualize current standard approaches as rank-1 canonical tensor decomposition and propose a generalized method that extends from rank-1 to rank-r canonical tensor decomposition to approximate the joint distribution of future tokens. To enhance model training, an auxiliary loss is introduced to address weight imbalances. 
Experimental results highlight several key findings:\n\n1.\tIncreasing the rank results in a decrease in joint loss.\n\n2.\tThe first token appears to have no correlation with different ranks.\n\n3.\tThe method is effective even when only the prediction heads are trained.\n\nThe proposed approach achieves notable speedups compared to autoregressive models and rank-1 baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\tThis work identifies the limitations of recent multi-token prediction standards and proposes a more generalized approach.\n\n2.\tThe experimental results demonstrate the method's effectiveness, and the ablation study underscores the importance of the introduced components.\", \"weaknesses\": \"1.\tThe work lacks comparison with existing state-of-the-art methods such as Medusa, Eagle, etc., which belong to the same research domain.\n\n2.\tIn the code generation setting, the performance of averaging two accepted draft tokens is not promising.\n\n3.\tThere are several typos in this version that need revision.\", \"questions\": \"1.\tIn line 113, the authors denote the input sequence as x_{t:1} and the corresponding embeddings as e_{t:1}. According to the description, the embeddings are the representations of the final transformer layer, while in Figure 1, the same value is denoted as z_t. Do z_t and e_t mean the same representation, or does e_t mean the \u201cinput\u201d embeddings? 
This notation is somewhat confusing.\\n\\n2.\\tAre there any results on the acceptance rate for Llama 8B, not just inference time?\\n\\n__Typos__:\\n\\n1.\\tIn line 116, a comma is missing before \\\"the conditional probabilities ...\\\".\\n2.\\tIn line 150, \\\"Note, that\\\" should be revised to \\\"Note that\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"One existing form of speculative decoding involves predicting k tokens at a time independently, which can be thought of as a rank-1 decomposition of the k-order joint probability tensor over those tokens. This paper instead proposes to predict the factors for a rank-r decomposition. They evaluate two instantiations of this idea: training a LM from scratch to predict this decomposition, and taking an existing LM and fine-tuning additional heads to predict this decomposition. Their experiments show that higher rank decompositions lead to higher acceptance rates in speculative decoding.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The method is well-motivated and explained clearly. The connection to MoE, which motivates a load-balancing auxiliary loss, is also interesting.\\n\\n(2) The paper seeks to improve inference speed in large models, which is an important problem.\", \"weaknesses\": \"While the method seems interesting and promising, the paper's experiments seem disorganized and insufficient to fully demonstrate the effectiveness of the method.\\n\\n(1) The majority of the results are for a 56.3M parameter trained on TinyStories, which is a very limited evaluation setting, both because the dataset is synthetic and because the setting involves retraining. 
There are also some experiments on head-only tuning for PyCodeGPT in Table 3, but the results in that setting are not very strong --- increasing the rank does not actually seem to improve inference speed for many of the models. The paper would benefit from more thorough evaluation and stronger results (especially on non-synthetic datasets, and on speeding up existing models rather than requiring retraining: for example, the evaluations done in https://arxiv.org/pdf/2211.17192 (Table 3) would improve this paper).\n\n(2) The majority of the experiments section seems to involve analysis rather than results: only Tables 1 and 3 report inference times, which are the main results. I would suggest moving other plots (token acceptance rate, first token vs joint loss, etc.) to a separate analysis section.\n\n(3) There are a substantial number of issues with the experiment design that would be beneficial to address: (a) In Figure 3, it seems like hyperparameters are being selected using the test set; I would suggest using a dev set instead. (b) To make comparisons fair, I would suggest training each rank for the same amount of wall-clock time, rather than number of steps, in case higher ranks require more time per forward pass. (c) The self-speculative setup makes the results hard to interpret because each rank uses a different target model. I would suggest that each method be speculative with respect to the same target model. 
(d) The paper would be clearer if the experiments were described concretely: for example, the paper states that \\\"Our measurements were made with seq length varying from 1024 to 4096\\\" (lines 408-409), but it's not clear which experiments use which sequence lengths.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"After reading the author response, I thank the authors for providing clarifications to Table 1 and Table 3 and answering some of my other questions. However, I still think it is important to compare to other baselines (e.g. EAGLE, Medusa as another reviewer points out) in addition to Gloeckle et al. 2024. Therefore I am keeping my score the same.\n\n----\n\nThe paper studies multi token prediction in transformer language models. Vanilla autoregressive decoding is expensive for long outputs since it only decodes one output at a time.\n\nThe authors are inspired by the work of Gloeckle et al. 2024. In Gloeckle et al. 2024, given a context x_{t:1} the next n tokens are predicted independently (with multiple heads). As the authors point out, this amounts to a rank-1 tensor approximation of the joint probability distribution.\n\nIn this work, the authors explore higher ranks (r > 1) using CP decomposition. They draw a connection to mixture-of-experts and propose an auxiliary load balancing strategy so all the weight is not on one expert (component). \n\nThey then perform experiments validating their work.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-Tackles an interesting and important problem\n\n-The method is written clearly. 
\\n\\n-I also find the connections to tensor decomposition interesting.\", \"weaknesses\": \"Some confusion on experimental results: I'm a bit confused as to how much of a speed up the author's approach gives over both the approach of Goeckle et al. 2024 (i.e. rank=1) and also vanilla non-autoregressive decoding for the same level of quality.\", \"for_example_in_table_1\": \"I see that in Table 1 the final column (time per token) is not much different across all the rows?\\n\\nMoreover I don't quite understand Table 3.\", \"comparisons\": \"I think the authors need additional baselines in addition to just ablations of their own approach from the related work. For example, as another reviewer suggested EAGLE and Medusa:\", \"https\": \"//arxiv.org/abs/2012.15833\", \"related_work\": \"The authors should also cite and discuss related work in non-autoregressive decoding (typically for neural machine translation) that has been developed for a while e.g. see below and citations therein. In particular it would be useful to discuss how the authors' approach compares and contrasts with these works.\", \"questions\": \"-How does the method combine with beam search?\\n\\n-Does the speedup increase or decrease as a function of model size?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. The work lacks comparison with existing state-of-the-art methods such as Medusa, Eagle, etc., which belong to the same research domain.\\n\\n\\nWe've considered our method as a generalization of the method, introduced in the paper [1], so we've taken this method as a baseline and made a comparison only with a base model (no speculative decoding) and rank 1 model (model from paper [1])\\n\\n2. 
In the code generation setting, the performance of averaging two accepted draft tokens is not promising.\n\nWe agree that the performance achieved on the PyCode experiment is not particularly strong. However, we would like to emphasize that in this case, we trained only the heads, leaving the main model unchanged. The current results indicate that at the cost of two forward passes, we generate approximately 2.5 tokens, representing a speedup over the vanilla model. While there is room for improvement, we believe this demonstrates the potential of our approach, even in a limited setting.\n\n\n3. In line 113, the authors denote the input sequence as $x_{t:1}$ and the corresponding embeddings as $e_{t:1}$. According to the description, the embeddings are the representations of the final transformer layer, while in Figure 1, the same value is denoted as $z_t$. Do $z_t$ and $e_t$ mean the same representation, or does $e_t$ mean the \u201cinput\u201d embeddings? This notation is somewhat confusing.\n\nYou are correct, and we appreciate your attention to this detail. $z_t$ and $e_t$ refer to the same object in this case. To avoid confusion, we have updated Figure 1 to use $e_t$ consistently throughout the manuscript.\n\n4. Are there any results on the acceptance rate for Llama 8B, not just inference time?\n\nThe inference times for Llama are obtained for untrained heads; we have not trained the model for this task, as it is computationally expensive. Inference time is presented to estimate the performance of one forward pass of a draft model with our custom head.\n\n\n\nWe've also conducted an additional set of experiments that involved training a 1B model on the fineweb dataset. 
To obtain the table, which can be seen below, we've generated 2048 tokens 10 times and averaged the results across different runs.\n\n| Generation of 2048 tokens | Time (s) | Amount of parameters (B) | Average number of accepted tokens |\n|------------------------|-----------|---------------|------------|\n| base | 43.8 | - | - |\n| rank 1 | 34 | 0.98 | 1.58 |\n| rank 2 | 28.9 | 1.2 | 1.96 |\n| rank 4 | 31.5 | 1.9 | 1.85 |\"}", "{\"metareview\": \"This paper aims to speed up the generation process of language models. Motivated by the observation that predicting multiple future tokens in parallel does not consider inter-token dependencies, and that it can be viewed as rank-1 tensor decomposition, this work proposes to extend that to a rank-r decomposition. Experiments show that the proposed method has higher acceptance rates compared to the baseline.\", \"strengths\": \"1. The proposed idea is well motivated and the extension to rank-r decomposition is neat.\", \"weaknesses\": \"1. Reviewers mentioned that experiments are only conducted on small models with small datasets, but this is addressed by the authors' rebuttal.\n2. Reviewers mentioned that baselines are lacking and this work should be compared to EAGLE and Medusa.\n3. Some reviewers mentioned that the presentation of this paper can be improved.\n\nOverall, the idea is very interesting and the proposed extension to rank-r decomposition is very neat. However, the experiments can be further improved by adding more contemporary baselines, and the new larger experiments the authors conducted during the rebuttal can be added to the paper. I'm recommending rejection based on reviewers' opinions, but I wouldn't mind if the paper gets accepted.\", \"additional_comments_on_reviewer_discussion\": \"Two common concerns are: 1. the experiments conducted seem not sufficient in scale and analyses. This has been addressed by the authors during the rebuttal. 2. 
Baselines are lacking (such as EAGLE and Medusa), and this does not seem to have been addressed yet.\"}" ] }
0DZEs8NpUH
Personality Alignment of Large Language Models
[ "Minjun Zhu", "Yixuan Weng", "Linyi Yang", "Yue Zhang" ]
Aligning large language models (LLMs) typically aims to reflect general human values and behaviors, but it often fails to capture the unique characteristics and preferences of individual users. To address this gap, we introduce the concept of Personality Alignment. This approach tailors LLMs' responses and decisions to match the specific preferences of individual users or closely related groups. Inspired by psychometrics, we created the Personality Alignment with Personality Inventories (PAPI) dataset, which includes data from over 320,000 real subjects across multiple personality assessments - including both the Big Five Personality Factors and Dark Triad traits. This comprehensive dataset enables quantitative evaluation of LLMs' alignment capabilities across both positive and potentially problematic personality dimensions. Recognizing the challenges of personality alignment—such as limited personal data, diverse preferences, and scalability requirements—we developed an activation intervention optimization method. This method enhances LLMs' ability to efficiently align with individual behavioral preferences using minimal data and computational resources. Remarkably, our method, PAS, achieves superior performance while requiring only 1/5 of the optimization time compared to DPO, offering practical value for personality alignment. Our work paves the way for future AI systems to make decisions and reason in truly personalized ways, enhancing the relevance and meaning of AI interactions for each user and advancing human-centered artificial intelligence. The dataset and code are released at https://github.com/zhu-minjun/PAlign.
[ "Personality Alignment", "Large language models", "behavioral preferences of LM" ]
Accept (Poster)
https://openreview.net/pdf?id=0DZEs8NpUH
https://openreview.net/forum?id=0DZEs8NpUH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xCcCy5VT00", "uv9Be2YDHf", "n7Mlm58qa9", "m1SHSUS3Oy", "legu4ED7CQ", "eGuStk7UkG", "dRCNSdk65C", "bYpuzIyYn9", "bTgdgiZtfd", "bEOZyCrPCR", "XiAdiXJCxN", "UoRUU86feH", "SzHpQBdWNP", "R9yRm56JGj", "PIM5iIt00X", "NKPkZOp6rR", "Kg4qEt8Wjd", "HNHk28BnxC", "FnV4BQP4gg", "Fdhl6rwTmp", "F0XdtT1N4d", "Eg7cC1KBLK", "ECYaNve2Zl", "AOnAkPQpJq", "6UR73Wrhv3", "62GwqNXz2r", "4nVhQC1uDl", "3UrvjmAPEF" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733166473995, 1732125311496, 1737523398902, 1732867939837, 1732125247041, 1732867144831, 1732125237309, 1732125916234, 1732125587236, 1732125520987, 1732125337234, 1730687483135, 1733224280631, 1732125502906, 1732125861823, 1734314544839, 1732125950514, 1732868725323, 1732125533835, 1732125368349, 1732125415878, 1732125131146, 1732867055668, 1730000679458, 1730661011507, 1732125875423, 1733167245961, 1732125833784 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission495/Reviewer_3xm2" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Reviewer_4LUh" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Area_Chair_wero" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Reviewer_nZZi" ], [ "ICLR.cc/2025/Conference/Submission495/Reviewer_3xm2" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ], [ "ICLR.cc/2025/Conference/Submission495/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 4LUh:\\n\\nWe would like to express our sincere gratitude for your time and effort in reviewing our manuscript and providing valuable feedback. As the deadline for author-reviewer discussions has been extended, we apologize for contacting you again to ensure our responses adequately address your concerns.\\n\\nA few days ago, we submitted detailed responses to your previous comments, and we sincerely hope these responses effectively address the issues you raised. For example, regarding diversity and data source concerns, we expanded our analysis scope during the discussion phase (including analyses across different ages, genders, and countries, comparing English-speaking vs non-English-speaking countries) and supplemented the Dark Triad inventory analysis and discussion beyond PAPI. 
Additionally, we revised the fine-tuning baseline comparisons in the appendix and expanded the details of manual evaluation.\n\nWe believe these revisions have further strengthened the paper! Please feel free to contact us if you need any clarification or have other questions. We are happy to continue the discussion.\n\nThank you again, and we look forward to your further feedback.\n\nBest Regards,\n\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"title\": \"Response 3\", \"comment\": \"> While this is a standard approach in personality research, self-reports can be subject to biases such as social desirability and lack of self-insight. Incorporating additional data sources, such as behavioral measures or peer ratings, could be more useful.\n\nWe appreciate this insightful suggestion. **While we completely agree that behavioral measures and peer ratings would provide invaluable complementary perspectives, these data sources often present significant accessibility challenges.** Many behavioral datasets and peer evaluation systems are rightfully protected due to privacy regulations, ethical considerations, and institutional policies. This protection of sensitive personal data, while absolutely necessary, creates practical barriers for large-scale personality research. Nevertheless, your suggestion has inspired us to think creatively about how to address these limitations within ethical and practical constraints, and we took the following actions.\n\nRather than relying solely on traditional Big Five self-reports, we have incorporated the **Dark Triad inventory** (measuring Machiavellianism, Narcissism, and Psychopathy) alongside our Big Five assessments. This dual-perspective approach provides a more complete picture of personality by capturing both socially desirable and less desirable traits. 
The Dark Triad measures are especially valuable because they tend to be less influenced by social desirability bias [3,4], offering a more candid window into aspects of personality that might otherwise be underreported. **Our experimental results (Table 4) demonstrate that PAS effectively aligns with both sets of traits, achieving strong performance across this broader spectrum of personality dimensions.** This balanced approach helps mitigate some of the limitations inherent in single-perspective assessment while remaining within ethical and practical boundaries. **It also further demonstrates the broad potential of our PAS approach to apply directly to more personality dimensions and combine with more personality research to achieve ai assistants that meet more personalized preferences**\\n\\n**Table 4: Dark Triad** \\n\\n###### GPT-4o Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| -------- | ---------------- | ---------- | ----------- |\\n| Few-Shot | **0.80** | **0.76** | **0.83** |\\n| P^2 | 1.17 | 2.04 | 2.00 |\\n\\n###### Llama-3-8B-Instruct Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| ------------------ | ---------------- | ---------- | ----------- |\\n| PPO | 1.48 | 1.98 | 2.19 |\\n| DPO | 1.41 | 1.99 | 2.12 |\\n| Prompt-MORL | 1.42 | 2.14 | 1.78 |\\n| Personalized-Soups | 1.08 | **1.76** | 1.84 |\\n| Few-Shot | 1.16 | 2.03 | 2.00 |\\n| P^2 | 1.17 | 2.04 | 2.01 |\\n| PAS (Ours) | **0.96** | 1.85 | **1.67** |\\n\\n###### Llama-3-70B-Instruct Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| ------------------ | ---------------- | ---------- | ----------- |\\n| PPO | 1.52 | 1.96 | 1.90 |\\n| DPO | 1.22 | 2.08 | 1.79 |\\n| Prompt-MORL | 1.15 | 1.99 | 1.76 |\\n| Personalized-Soups | 1.11 | 1.95 | 1.77 |\\n| Few-Shot | 1.04 | 1.89 | 1.80 |\\n| P^2 | 1.02 | 2.11 | 1.93 |\\n| PAS (Ours) | **1.01** | **1.84** | **1.62** |\\n\\nThese results have been integrated in our revised 
manuscript (see Table 1)\\n\\nYour suggestion about incorporating additional data sources remains an exciting direction for future research, and we would be very interested in exploring collaborative opportunities with institutions that have access to such protected behavioral and peer-rating data. We are grateful for this constructive feedback that has helped us better articulate our methodological choices and future directions.\\n\\n[3] G\\u00f3mez-Leal R, Fern\\u00e1ndez-Berrocal P, Guti\\u00e9rrez-Cobo M J, et al. The Dark Tetrad: analysis of profiles and relationship with the Big Five personality factors[J]. Scientific Reports, 2024, 14(1): 4443.\\n\\n[4] Paulhus D L, Williams K M. The dark triad of personality: Narcissism, Machiavellianism, and psychopathy[J]. Journal of research in personality, 2002, 36(6): 556-563.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the authors' response. My concerns are resolved and I've decided to increase my score by two.\"}", "{\"title\": \"Response 2\", \"comment\": \"And we are especially excited to share **our cross-cultural analysis spanning 18 countries, which reveals PAS's impressive adaptability across diverse cultural contexts.** The method demonstrates consistent excellence across Western nations (France: Extraversion=1.17), Asian countries (China: Openness=1.02), and Nordic regions (Norway: Agreeableness=1.18). 
These comprehensive results not only validate PAS's effectiveness but also highlight how our large-scale dataset, despite its demographic patterns, enables robust personality alignment across diverse populations.\\n\\n**Table 3: Country**\\n\\n| Country | Method | Agreeableness | Conscientiousness | Extraversion | Neuroticism | Openness |\\n| ----------- | -------- | ------------- | ----------------- | ------------ | ----------- | -------- |\\n| France | Few-shot | 1.54 | 1.55 | 1.37 | 1.33 | 1.40 |\\n| | P\\u00b2 | 1.71 | 1.83 | 1.67 | 1.73 | 1.56 |\\n| | PAS | **1.42** | **1.47** | **1.17** | 1.46 | **1.15** |\\n| Malaysia | Few-shot | 1.58 | 1.54 | 1.44 | 1.41 | 1.59 |\\n| | P\\u00b2 | 2.08 | 1.86 | 1.87 | 1.65 | 1.92 |\\n| | PAS | **1.38** | **1.35** | **1.21** | **1.41** | **1.07** |\\n| China | Few-shot | 1.39 | 1.38 | 1.36 | 1.34 | 1.34 |\\n| | P\\u00b2 | 1.81 | 1.66 | 1.83 | 1.53 | 1.37 |\\n| | PAS | **1.19** | **1.14** | **1.06** | **1.29** | **1.02** |\\n| Norway | Few-shot | 1.43 | 1.39 | 1.39 | 1.41 | 1.52 |\\n| | P\\u00b2 | 1.55 | 1.66 | 1.74 | 1.60 | 1.70 |\\n| | PAS | **1.18** | **1.20** | **1.24** | **1.16** | **1.06** |\\n| Germany | Few-shot | 1.51 | 1.54 | 1.56 | **1.17** | 1.44 |\\n| | P\\u00b2 | 1.58 | 1.55 | 2.11 | 1.47 | 1.57 |\\n| | PAS | **1.38** | **1.30** | **1.32** | 1.21 | **1.36** |\\n| Sweden | Few-shot | 1.41 | 1.52 | 1.52 | 1.30 | 1.48 |\\n| | P\\u00b2 | 1.67 | 1.70 | 1.61 | 1.71 | 1.67 |\\n| | PAS | **1.25** | **1.39** | **1.38** | **1.21** | **1.30** |\\n| Finland | Few-shot | 1.45 | 1.46 | 1.54 | 1.56 | 1.62 |\\n| | P\\u00b2 | 1.76 | 1.66 | 1.72 | 1.80 | 1.82 |\\n| | PAS | **1.33** | **1.27** | **1.30** | **1.44** | **1.41** |\\n| New Zealand | Few-shot | 1.59 | 1.60 | 1.63 | 1.40 | 1.53 |\\n| | P\\u00b2 | 1.91 | 1.98 | 2.07 | 1.62 | 1.67 |\\n| | PAS | **1.21** | **1.23** | **1.31** | **1.33** | **1.21** |\\n| Thailand | Few-shot | 1.47 | 1.45 | 1.32 | 1.46 | 1.43 |\\n| | P\\u00b2 | 1.65 | 1.52 | 1.68 | 1.91 | 1.90 |\\n| | PAS | 
**1.29** | **1.30** | **1.05** | **1.23** | **1.06** |\\n\\n\\nYour feedback has been invaluable in helping us better articulate and validate these important aspects of our work, and the information above has been integrated into our revised manuscript (See Appendix E.5: Diverse Demographic Groups).\\n\\n\\n---\\n\\n[1] Roberts B W, Mroczek D. Personality trait change in adulthood[J]. Current directions in psychological science, 2008, 17(1): 31-35.\\n\\n\\n\\n[2] Soto, C. J., & Tackett, J. L. (2015). Personality traits in childhood and adolescence: Structure, development, and outcomes. *Current Directions in Psychological Science, 24*(5), 358\\u2013362. https://doi.org/10.1177/0963721415589345\"}", "{\"comment\": \"Dear Reviewer nZZi,\\n\\nAs the discussion period is coming to an end soon, we wanted to check if you have had a chance to review our responses. Please let us know if your questions have been adequately addressed - we are happy to provide any additional clarification needed. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"title\": \"Response 1\", \"comment\": \"> While the PAPI dataset is impressively large, it doesnt seem to be diverse. Around 60% of subjects are female and the average age is 25 years. This skew can potentially bias the results and limit generalizability to other populations.\\n\\n\\n\\nThank you for your valuable observation regarding demographic representation. We greatly appreciate this thoughtful feedback, which has encouraged us to conduct more comprehensive analyses of our dataset's diversity.\\n\\nWe acknowledge that while our initial paper presented certain demographic patterns, we may not have fully emphasized the intentional design behind PAPI's large scale (>300K samples). 
**The dataset's extensive size was specifically chosen to ensure meaningful representation across diverse populations.** While it spans ages 10-100, covering the complete developmental lifecycle, we have substantial samples in formative periods (n=130,094 for ages 10-20, n=107,838 for ages 20-30) which, as studies suggest [1,2], represent critical windows for personality development. This natural age distribution has actually proven beneficial for capturing personality traits during their most dynamic periods of formation.\\n\\nInspired by your insightful comment, **we have significantly expanded our analysis during the rebuttal period (Appendix E.5: Diverse Demographic Groups). Specifically, we categorized the Dev-Set by age, gender, and country. For each group, we selected corresponding subsets and used the same processing pipeline to choose 300 subject samples per group. We tested these 300 samples and will open-source the specific data for each group. The results are particularly encouraging - PAS demonstrates remarkable consistency across demographic segments.** Our age-based analysis reveals strong performance across all groups, with PAS achieving exceptional results in Conscientiousness (scores 1.09-1.30) and Openness (1.03-1.31). 
\\n\\n**Table 1: Age**\\n\\n| Trait | Method | 10-20 Years | 20-30 Years | 30-40 Years | 40-50 Years | 50-60 Years | 60-70 Years |\\n| ----------------- | -------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\\n| Agreeableness | Few-shot | 1.81 | 1.81 | 1.70 | 1.84 | 1.78 | 1.81 |\\n| | P\\u00b2 | 2.37 | 1.98 | 2.03 | 1.86 | 1.92 | 2.41 |\\n| | PAS | **1.19** | **1.24** | **1.18** | **1.48** | **1.26** | **1.18** |\\n| Conscientiousness | Few-shot | 1.94 | 2.05 | 2.13 | 2.35 | 2.19 | 2.09 |\\n| | P\\u00b2 | 2.17 | 2.23 | 2.19 | 3.10 | 2.29 | 2.53 |\\n| | PAS | **1.20** | **1.14** | **1.09** | **1.27** | **1.30** | **1.29** |\\n| Extraversion | Few-shot | 1.92 | 1.84 | 1.90 | 2.52 | 1.80 | 2.24 |\\n| | P\\u00b2 | 1.98 | 2.11 | 2.11 | 2.62 | 1.89 | 2.80 |\\n| | PAS | **1.14** | **1.17** | **1.00** | **1.51** | **1.35** | **1.14** |\\n| Openness | Few-shot | 1.61 | 1.61 | 1.52 | 1.84 | 1.58 | 1.59 |\\n| | P\\u00b2 | 1.80 | 1.82 | 1.58 | 2.25 | 1.65 | 1.64 |\\n| | PAS | **1.18** | **1.16** | **1.04** | **1.31** | **1.29** | **1.03** |\\n| Neuroticism | Few-shot | **1.32** | 1.71 | 1.43 | **1.17** | 1.33 | **1.48** |\\n| | P\\u00b2 | 1.68 | 1.93 | 1.45 | 1.45 | 1.52 | 1.94 |\\n| | PAS | 1.44 | **1.32** | **1.33** | 1.45 | **1.27** | 1.53 |\\n\\n**Similarly, gender analysis shows well-balanced performance**, with PAS maintaining superior alignment for both male (Extraversion=1.56) and female (Extraversion=1.32) groups compared to baseline methods that score above 2.0.\\n\\n**Table 2: Gender** \\n\\n| Gender | Method | Agreeableness | Conscientiousness | Extraversion | Openness | Neuroticism |\\n| ------ | -------- | ------------- | ----------------- | ------------ | -------- | ----------- |\\n| Male | Few-shot | 1.66 | 1.84 | 2.17 | 1.52 | 1.32 |\\n| | P\\u00b2 | 2.24 | 2.33 | 2.52 | 2.05 | 1.47 |\\n| | PAS | **1.33** | **1.30** | **1.56** | **1.03** | **1.16** |\\n| Female | Few-shot | 2.14 | 2.28 | 2.09 | 1.81 | 1.58 |\\n| | 
P\\u00b2 | 2.83 | 2.53 | 2.24 | 2.12 | 2.19 |\\n| | PAS | **1.53** | **1.39** | **1.32** | **1.26** | **1.58** |\"}", "{\"title\": \"Response 4\", \"comment\": \"> Figures 6 and 7 show a significant disparity between LLM-as-a-judge evaluations and human evaluations, which makes me question the consistency and reliability of the judgment process.\\n>\\n> Other details: In line 1304, the authors mention that human annotators come from machine learning and computer science, but ML is a part of CS. Additionally, could the authors disclose the educational background of the annotators (undergraduate or graduate)?\\n\\nThank you for this important question regarding the evaluation methodology discrepancy. \\n\\nDuring the rebuttal period, we conducted additional analyses of our evaluation approaches, which revealed high **inter-annotator agreement among human evaluators** (IAA=0.82 for general evaluation and IAA=0.85 for self-evaluation). These strong agreement scores demonstrate the consistency and reliability of our human evaluation protocol.\\n\\n**The observed disparity between LLM and human evaluations primarily stems from their fundamentally different scoring methodologies**. GPT-4 employs a single-graded scoring system (1-6 points, detailed in Appendix Figure 12) that aims for standardized, independent assessment of each response. However, this approach exhibits a notable central tendency bias, with 72.8% of scores falling between 2-4 (45.1% being 3), leading to more frequent ties in win rate calculations.\\n\\nIn contrast, human evaluators used a pairwise comparison approach (detailed in Appendix Figure 14), where they directly compare two randomly selected responses side-by-side. This methodology naturally encourages more decisive judgments, as evaluators can focus on subtle qualitative differences between responses. 
**Consequently, many cases that appeared as \\\"ties\\\" in GPT-4's evaluation were clearly differentiated as wins for PAS in human evaluation.** \\n\\nRather than indicating inconsistency, these methodological differences provide complementary insights into model performance. The single-graded approach offers standardized scoring, while pairwise comparisons capture more nuanced preferences. **Most importantly, both methods maintain consistent ranking orders, which reinforces the reliability of our results.** The apparent disparity between LLM and human evaluations actually enriches our understanding by providing multi-dimensional perspectives on model performance.\\n\\nRegarding annotator qualifications, we had three highly qualified evaluators: a Master's student and a Ph.D. student in Computer Science specializing in machine learning, and a Ph.D. graduate in Cognitive Psychology currently working as a university psychological counselor. We have corrected the text in line 1304 to more precisely describe their backgrounds. \\n\\n\\n\\n> Could the authors explain why the PAS method is so effective by looking at the internal hidden layer representations of the model? Intuitively, direct training approaches like DPO/PPO might yield more noticeable improvements, but the results here contradict my expectations.\\n\\n We have added a comprehensive analysis in Appendix E.6 that illuminates their fundamental differences, with Figure 18 providing a clear visual illustration.\\n\\nTraditional methods like PPO/DPO rely on gradient-based updates through backpropagation, which faces inherent limitations in personality alignment tasks. 
**As shown in the upper panel of Figure 18, these methods modify all attention heads through training iterations, leading to widespread parameter changes that may not effectively capture personality-trait relationships.** The gradient-based updates struggle to precisely identify which attention heads are most crucial for personality expression, often resulting in suboptimal modifications that **can disturb the model's general capabilities**.\\n\\nIn contrast, **as illustrated in the lower panel of Figure 18, PAS employs a two-stage surgical intervention strategy.** First, it precisely identifies key attention heads most responsive to personality traits (visualized by the \\\"Find Heads\\\" step), then selectively adjusts their activation directions (\\\"Change the direction\\\" step) while leaving other components undisturbed. This targeted approach allows PAS to achieve more precise personality alignment without the broad parameter modifications required by traditional methods. This architectural insight helps explain our strong empirical results across both alignment quality and general performance metrics.\"}", "{\"title\": \"Response 4\", \"comment\": \"> The proposed tuning method, although efficient, but requires access to model weights. I doubt its ability to generalize such methods to black-box methods (e.g., GPTs) in the future.\\n\\nWe respectfully acknowledge this limitation while also seeing it as an opportunity to highlight the unique advantages of our approach.\\n\\n**While PAS indeed requires access to model weights, this \\\"white-box\\\" requirement enables precise and efficient personality alignment through targeted activation adjustments. As demonstrated in our experiments, this approach achieves superior performance compared to other white-box methods (PPO, DPO), while requiring only 1/6 of the computational resources. 
For example, when using Llama-3-8B-Instruct, PAS achieves significantly better alignment scores (Agreeableness=0.94, Conscientiousness=0.91) compared to black-box Few-shot methods (1.28, 1.30) and even outperforms GPT-4o in several dimensions.** \\n\\nFurthermore, as more open-source models become available (like Llama-3, Mistral, Gemma), the white-box requirement becomes less limiting. We believe our method's demonstrated efficiency and effectiveness make it particularly valuable for these increasingly prevalent open-source models, where precise control over model behavior is both possible and desirable.\\n\\nWhile PAS currently requires model access, we envision it as a valuable solution for commercial AI providers like OpenAI, Anthropic, and xAI to offer personalized experiences. Our method has compelling practical advantages:\\n\\n- 1 Minimal Computational Cost: PAS requires only ~20 seconds of forward propagation during initial setup, with negligible overhead during inference. The additional weights per user are merely 20K parameters - insignificant compared to 100B-scale models.\\n\\n- 2 Efficient Personalization: Companies could offer opt-in personalization where users complete a brief personality assessment. The resulting lightweight PAS weights (20K parameters) could be stored and applied efficiently during interactions, enabling truly personalized AI assistants without the computational burden of fine-tuning or prompt engineering.\\n\\n We appreciate your insight as it helps clarify the specific use cases. **We will release our source code and dataset under the MIT License, which will significantly facilitate community collaboration and exchange.**\\n\\n\\n\\n> In any of your experiments, did any of your tuning or evaluation methods trigger safety wall? For example, the model might refuse to answer some questionnaire questions related to consciousness or self-awareness. 
\\n\\nIndeed, in our experiments with **GPT-4o**, approximately **20%** of queries were met with safety-related refusals to respond, particularly for questions involving self-awareness or consciousness. **We excluded these instances from our analysis to maintain evaluation consistency.** For the **Llama-3** series models, **we did not trigger any safety wall**, because we implemented **a structured prompting approach to ensure consistent responses while respecting model safety boundaries.** Specifically, we prefixed the Assistant's responses with a controlled token \\\"Option\\\", followed by the question and response format. For example:\\n\\n```\\nHuman: Do you trust others easily?\\nAssistant: Option:\\n```\\n\\nWe believe these implementation details and clarifications help demonstrate our careful consideration of both safety boundaries and evaluation consistency. Appendix C.1 has been revised to reflect this. \\n\\n\\n---\\n\\nWe hope this additional information addresses your concerns about model behavior and safety filters in our experiments. 
We remain committed to responsible AI development while pursuing effective personality alignment methods!\"}", "{\"title\": \"Response 2\", \"comment\": \"**Table 1: Age**\\n\\n| Trait | Method | 10-20 Years | 20-30 Years | 30-40 Years | 40-50 Years | 50-60 Years | 60-70 Years |\\n| ----------------- | -------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\\n| Agreeableness | Few-shot | 1.81 | 1.81 | 1.70 | 1.84 | 1.78 | 1.81 |\\n| | P\\u00b2 | 2.37 | 1.98 | 2.03 | 1.86 | 1.92 | 2.41 |\\n| | PAS | **1.19** | **1.24** | **1.18** | **1.48** | **1.26** | **1.18** |\\n| Conscientiousness | Few-shot | 1.94 | 2.05 | 2.13 | 2.35 | 2.19 | 2.09 |\\n| | P\\u00b2 | 2.17 | 2.23 | 2.19 | 3.10 | 2.29 | 2.53 |\\n| | PAS | **1.20** | **1.14** | **1.09** | **1.27** | **1.30** | **1.29** |\\n| Extraversion | Few-shot | 1.92 | 1.84 | 1.90 | 2.52 | 1.80 | 2.24 |\\n| | P\\u00b2 | 1.98 | 2.11 | 2.11 | 2.62 | 1.89 | 2.80 |\\n| | PAS | **1.14** | **1.17** | **1.00** | **1.51** | **1.35** | **1.14** |\\n| Openness | Few-shot | 1.61 | 1.61 | 1.52 | 1.84 | 1.58 | 1.59 |\\n| | P\\u00b2 | 1.80 | 1.82 | 1.58 | 2.25 | 1.65 | 1.64 |\\n| | PAS | **1.18** | **1.16** | **1.04** | **1.31** | **1.29** | **1.03** |\\n| Neuroticism | Few-shot | **1.32** | 1.71 | 1.43 | **1.17** | 1.33 | **1.48** |\\n| | P\\u00b2 | 1.68 | 1.93 | 1.45 | 1.45 | 1.52 | 1.94 |\\n| | PAS | 1.44 | **1.32** | **1.33** | 1.45 | **1.27** | 1.53 |\\n\\n\\n\\n**Table 2: Gender** \\n\\n| Gender | Method | Agreeableness | Conscientiousness | Extraversion | Openness | Neuroticism |\\n| ------ | -------- | ------------- | ----------------- | ------------ | -------- | ----------- |\\n| Male | Few-shot | 1.66 | 1.84 | 2.17 | 1.52 | 1.32 |\\n| | P\\u00b2 | 2.24 | 2.33 | 2.52 | 2.05 | 1.47 |\\n| | PAS | **1.33** | **1.30** | **1.56** | **1.03** | **1.16** |\\n| Female | Few-shot | 2.14 | 2.28 | 2.09 | 1.81 | 1.58 |\\n| | P\\u00b2 | 2.83 | 2.53 | 2.24 | 2.12 | 2.19 |\\n| | PAS | **1.53** | **1.39** | 
**1.32** | **1.26** | **1.58** |\"}", "{\"title\": \"Response 4\", \"comment\": \"> The evaluation of PAS focuses primarily on high-level alignment with the Big Five traits. However, personality is a complex, multifaceted construct, and individuals can vary in their expression of specific facets within each trait.\\n\\nIndeed, the Big Five traits are inherently multifaceted - for example, **Extraversion comprises six distinct facets**: Warmth (level of interpersonal intimacy), Gregariousness (preference for others' company), Assertiveness (social dominance and leadership), Activity (pace of living), Excitement-Seeking (need for stimulation), and Positive Emotions (tendency to experience joy). This multifaceted nature of personality is precisely why PAS was designed to work at a granular level. **Rather than treating personality as simple categorical variables, PAS identifies fine-grained personality-relevant features during alignment and enables continuous-valued adjustments during inference.** **This design allows precise control across trait dimensions and supports arbitrary combinations of personality characteristics.** The effectiveness of this nuanced approach is further validated by our results on the Dark Triad inventory (Table 4), which provides a complementary perspective on personality alignment from a different psychological framework. We appreciate your comment helping us highlight how PAS addresses the intricate nature of personality through its flexible, continuous-valued approach. We have revised Section 3 and Section 5.\\n\\n\\n\\n> The PAPI dataset uses a multiple-choice format for collecting personality data. While this allows for structured and efficient data collection, it may limit the richness and naturalness of the responses.\\n\\nThank you for this thoughtful observation about data collection methodology. Your insight has helped us better articulate the careful reasoning behind our experimental design. 
We specifically chose the multiple-choice format for PAPI as it aligns with established best practices in personality assessment research. **This standardized approach has been extensively validated in psychological studies and enables reliable, large-scale personality measurement while controlling for response variability.** The IPIP-NEO questionnaires we employed are widely recognized in the field for their robust psychometric properties. Most importantly, this structured format enabled us to **collect and validate a large-scale corpus of personality data (>300K samples)**, which would have been extremely challenging with open-ended formats.\\n\\nNevertheless, we fully appreciate your concern about response richness and naturalness. This is precisely why we conducted comprehensive **evaluations of open-ended generation** capabilities, thoroughly documented in Section 5.3 and Appendix E: \\\"Open-ended Generation Performance\\\". **Our results demonstrate that models aligned using PAS can successfully generalize from multiple-choice training to produce natural, contextually appropriate open-ended responses.** When evaluated by both GPT-4 and human judges, PAS-aligned models consistently outperformed baselines in generating personality-consistent free-form text (winning rates 41%-45% against various baselines). The human evaluation results are particularly encouraging, showing that the personality traits learned through structured data successfully transfer to natural language generation. We have modified Section 5.3 to highlight this important aspect of our experimental validation.\"}", "{\"summary\": \"This paper proposes the concept of Personality Alignment for tailoring LLMs to match the preferences and behaviors of individual users or groups. The authors created a large-scale dataset called PAPI with data on behavioral preferences from over 300,000 real subjects across the Big Five personality dimensions. 
They also propose the Personality Activation Search (PAS) method for efficiently aligning LLMs with individual preferences during inference. PAS identifies key activation vectors corresponding to personality traits and optimally shifts activations in those directions. The authors show that PAS achieves strong performance in capturing individual preferences with high compute efficiency compared to baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"A key strength of the work is the PAPI dataset, with over 300,000 real-world subjects providing detailed responses to the IPIP-NEO-120 and IPIP-NEO-300 questionnaires. The scale is impressive.\\n\\nThe Personality Activation Search (PAS) method is interesting. By identifying key activation vectors that correspond to personality traits and optimally shifting activations in those directions, this approach can more effectively do personality alignment during inference. \\n\\nThe authors also conduct a comprehensive evaluation of PAS, comparing it against prompting-based (Few-Shot, P2) and RL-based (PPO, DPO) baselines on the PAPI dataset.\", \"weaknesses\": \"While the PAPI dataset is impressively large, it doesn't seem to be diverse. Around 60% of subjects are female and the average age is 25 years. This skew can potentially bias the results and limit generalizability to other populations.\\nAlso, the PAPI dataset relies on self-report data from personality questionnaires. While this is a standard approach in personality research, self-reports can be subject to biases such as social desirability and lack of self-insight. Incorporating additional data sources, such as behavioral measures or peer ratings, could be more useful.\\n\\nThe evaluation of PAS focuses primarily on high-level alignment with the Big Five traits. However, personality is a complex, multifaceted construct, and individuals can vary in their expression of specific facets within each trait. 
\\n\\nThe PAPI dataset uses a multiple-choice format for collecting personality data. While this allows for structured and efficient data collection, it may limit the richness and naturalness of the responses. \\nThe paper also compares PAS to prompting and RL baselines but does not include a comparison to fine-tuning the entire language model. This is an important consideration as well.\", \"questions\": \"Some additional questions remain:\\n\\nCan the authors provide more details on the human evaluation process, such as annotator screening, training, and inter-annotator agreement metrics? This would help validate the human evaluation results. The paper is light on this.\\nHow do you expect the methods to perform on multilingual models and non-English datasets? \\nThe discussion on negative societal impacts of AI hyper-personalization (e.g. filter bubbles, opinion polarization) is light. Authors should expand on this more. \\nFinally, exploring beyond multiple choice for collecting preference data, such as open-ended text, can be interesting and more useful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 4LUh and nZZi:\\n\\n\\nGiven that the author-reviewer discussion period will **end in an hour**, we are eager to ensure that all your suggestions and comments have been thoroughly addressed!\\n\\nOnce again, thank you for your thoughtful review process, and we look forward to your final comments!\\n\\nBest Regards,\\n\\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"title\": \"Response 1\", \"comment\": \"We are deeply grateful for your insightful and encouraging review. Your recognition of our work's contribution to personality-based LLM alignment validates our research direction. Your appreciation of our method's efficiency and practical applications is especially valuable. 
Your detailed feedback not only acknowledges our paper's strengths but also provides valuable perspectives for the field's development!\\n\\n\\n\\n> - Presentation: The authors claim that they collected 307k human samples to craft the PAPI dataset and the use of the IPIP-NEO-120/IPIP-NEO-300 for evaluations as part of their contributions. However, the IPIP-NEO series inventories were frequently used in prior works (e.g., Jiang et al., 2024 as mentioned in the paper), and the 307k responses are also publicly available.\\n\\n\\n\\nWe sincerely appreciate your attention to detail regarding the proper attribution of prior work.\\n\\nWe fully acknowledge that the IPIP-NEO series inventories are well-established assessment tools that have been utilized in previous research. **We have revised our manuscript to more clearly articulate that our contribution lies not in the creation of these inventories or the initial collection of responses, but rather in how we've uniquely integrated and applied these resources for personality alignment.**\\n\\nMost notably, we've further enhanced the dataset by **incorporating 18,192 samples from the Dark Triad inventory in rebuttal stage**, which adds an important complementary dimension to personality assessment. **By combining the Big Five traits from IPIP-NEO with the Dark Triad measures (Machiavellianism, Narcissism, and Psychopathy), we've created a more complete framework for evaluating personality alignment across both socially desirable and challenging personality aspects.** What makes our approach distinctive is the comprehensive integration of both positive and negative personality dimensions. We have revised Section 1 and Section 3 to better reflect this nuanced contribution and to properly acknowledge the foundational work that made our research possible. \\n\\n\\n\\n> - The implications of the 307k PAPI dataset are not that clear. 
For example, in the experiments authors performed, it seems only the overall average personality tendencies and specific participants' responses are used. So what's the actual advantages of using a such large dataset?\\n\\nThe primary strength of our 307K-sample Dev-Set lies in its remarkable **demographic diversity - spanning multiple age groups (from teenagers to seniors), various nationalities (including France, China, Norway, Thailand, etc.), and different cultural backgrounds.** This extensive coverage enables us to study personality alignment across globally representative populations. **During the rebuttal period, we have leveraged this diversity to conduct comprehensive experiments across different demographic groups, as detailed in the following Tables 1, 2, and 3 and Appendix E.5**. For example, our age-based analysis shows PAS's effectiveness across six age brackets (10-70 years), while our cross-cultural analysis demonstrates robust performance across Western, Asian, and Nordic nations. We hope such a large-scale, diverse dataset can be helpful for subsequent extensive research and detailed analyses.\\n\\nWe have revised Section 3 to better articulate these advantages and to showcase how the extensive Dev-Set supports rigorous evaluation of personality alignment across different populations. Our supplementary experiments validate that PAS maintains strong performance across various demographic segments (e.g., achieving strong alignment scores in France: Extraversion=1.17, China: Openness=1.02, Norway: Agreeableness=1.18), demonstrating the practical value of having such a comprehensive dataset. We sincerely appreciate your question as it has helped us better highlight the crucial role of our large-scale Dev-Set in enabling thorough cross-cultural and demographic analyses. 
These tables have been added to Appendix E.5.\"}", "{\"title\": \"Response 2\", \"comment\": \"> Q2: While the proposed method performs well on the dataset, it is actually quite similar to approaches used in many previous studies [1][2]. As a result, the novelty of the method is somewhat lacking, and the authors should be careful not to overstate their contribution.\\n\\n\\n\\n**While we acknowledge the foundational insights drawn from previous activation-based methods like [1,2], our work introduces crucial novel elements specifically designed for personality alignment challenges. Unlike prior work that primarily focuses on steering activations in a single direction (e.g., maximizing safety), our approach addresses the more nuanced challenge of achieving precise, balanced alignment across multiple personality dimensions simultaneously.** The key innovation lies in our contrastive layer selection and optimal intervention strength optimization, which ensures appropriate alignment levels - neither too strong nor too weak - across different personality traits.\\n\\n**A fundamental difference from previous safety-focused approaches is that PAS must carefully calibrate activation steering to maintain authentic personality expression. This is particularly evident in our experimental results where PAS achieves superior performance on both alignment accuracy and general capability preservation.** Based on your feedback, we have revised our introduction and related work sections to provide a more thorough comparative discussion of these methodological relationships while better articulating our unique contributions to personality-aware model alignment.\\n\\nWe believe this work opens up new directions for developing more sophisticated activation-based methods that can handle the delicate balance required for personality-aligned AI systems. 
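For readers who want a concrete picture of the activation-steering family of methods discussed above, here is a minimal NumPy sketch of the common difference-of-means recipe. The function names, toy data, and single-layer setup are illustrative assumptions on our part, not the actual PAS implementation (which additionally performs contrastive layer selection and intervention-strength optimization):

```python
import numpy as np

def trait_direction(pos_acts, neg_acts):
    # Steering vector: difference of mean hidden activations between
    # trait-positive and trait-negative prompts, normalized to unit length.
    d = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, alpha):
    # Shift a hidden state along the trait direction; alpha controls the
    # intervention strength (too large distorts fluency, too small leaves
    # the trait unexpressed).
    return hidden + alpha * direction

# Toy activations: 8 trait-positive / 8 trait-negative prompts, hidden dim 16.
rng = np.random.default_rng(0)
pos = rng.normal(0.5, 1.0, size=(8, 16))
neg = rng.normal(-0.5, 1.0, size=(8, 16))

d = trait_direction(pos, neg)
h = rng.normal(size=16)
h_steered = steer(h, d, alpha=2.0)

# Steering moves the state toward the trait-positive side:
assert h_steered @ d > h @ d
```

Because the direction is unit-normalized, the projection onto it grows by exactly alpha, which is what makes a scalar "intervention strength" easy to search over.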
\\n\\n\\n\\n> Essentially, the proposed method is a form of personalized alignment, so the authors should compare it against more baselines [3].\\n\\nThank you for your constructive suggestion. **We have expanded baseline comparisons to include two recent personalized alignment approaches[1]**. For Prompt-MORL, we incorporate personality trait descriptions into prompts through a template: \\\"You are an AI assistant with {openness_score} openness, {conscientiousness_score} conscientiousness...\\\" etc., and implement a multi-objective reward model to compute scores across different personality dimensions. For Personalized-Soups, we train separate models specialized for each Big Five trait and Dark Triad dimension, then merge their parameters during inference using personality scores as merging weights. Both implementations use the same PAPI dataset for training.\\n\\n\\n\\n###### Llama-3-8B-Instruct\\n\\n| Method | Alignment Mode | Agreeableness \\u2193 | Conscientiousness \\u2193 | Extraversion \\u2193 | Neuroticism \\u2193 | Openness \\u2193 | Machiavellianism \\u2193 | Narcissism \\u2193 | Psychopathy \\u2193 | Score |\\n| -------------------- | ------------------------ | --------------- | ------------------- | -------------- | ------------- | ---------- | ------------------ | ------------ | ------------- | -------- |\\n| PPO | White-Box (Alignment) | 1.63 | 1.51 | 1.45 | 1.42 | 1.61 | 1.48 | 1.98 | 2.19 | 13.27 |\\n| DPO | White-Box (Alignment) | 1.54 | 1.42 | 1.54 | 1.74 | 1.21 | 1.41 | 1.99 | 2.12 | 12.97 |\\n| *Prompt-MORL* | White-Box (Alignment) | 1.18 | 0.93 | 1.01 | 1.23 | 1.00 | 1.42 | 2.14 | 1.78 | 10.88 |\\n| *Personalized-Soups* | White-Box (Alignment) | 1.06 | 0.91 | 0.93 | 1.28 | 0.80 | 1.08 | **1.76** | 1.84 | 9.66 |\\n| Few-Shot | Black-Box (Prompt-Based) | 1.28 | 1.30 | 1.40 | 1.09 | 0.89 | 1.16 | 2.03 | 2.00 | 11.15 |\\n| P\\u00b2 | Black-Box (Prompt-Based) | 1.39 | 1.33 | 1.41 | 1.22 | 1.68 | 1.17 | 2.04 | 2.01 | 12.25 |\\n| **PAS (Ours)** | 
White-Box (Alignment) | **0.94** | **0.91** | **0.86** | **0.98** | **0.72** | **0.96** | 1.85 | **1.67** | **8.89** |\"}", "{\"metareview\": \"We recommend the paper to be accepted for Poster.\\n\\nThe paper can be of interest to the wide community at ICLR working on LLM and it introduces a relatively novel methodology that seems to be more efficient than baseline methods. \\n\\nBelow a more detailed description of the paper.\\n\\nThe paper introduces the concept of Personality Alignment, aimed at customizing large language models (LLMs) to align with the preferences and behaviors of individual users or groups. To support this approach, the authors developed a large-scale dataset called PAPI, which contains behavioral preference data from over 300,000 real participants across the Big Five personality dimensions. They also propose a novel method called Personality Activation Search (PAS) to efficiently align LLMs with user preferences during inference. PAS identifies key activation vectors corresponding to personality traits and optimally adjusts activations along these dimensions. 
\\n\\nThe strengths (S#) of the paper are as follows: \\n \\n- (S1)\\tThe results demonstrate that PAS outperforms baseline methods in capturing individual preferences while maintaining high computational efficiency\\n- (S2)\\tThe paper provides a new dataset generation pipeline that can be used and enriched for the task taken into account.\\n- (S3)\\tThe experimental analysis is comprehensive and provides detailed descriptions of the experimental setup.\\n- (S4)\\tThe paper is well written, and easy to follow also for a non-specialized audience\\n\\nThe key weaknesses (W#) identified and that remain are as follows: \\n\\n- (W1)\\tUse of model weights may hinder the applicability of the method to only a few LLMs\\n- (W2)\\tPossible bias in the annotators used (only from the CS community)\\n- (W3)\\tSignificant disparity between LLM-as-a-judge evaluations and human evaluations should be further discussed\\n\\nMany of the points raised by the reviewers were addressed by the authors.\", \"additional_comments_on_reviewer_discussion\": \"The authors have been proactive in addressing the comments raised by the reviewers.\\nReviewer 3xm2 was engaged in reading the authors' response and increased the score accordingly, while being confident in the decision. \\nReviewers 4LUh and nZZi did not follow up on their reviews after extensive responses from the authors. \\nAs per the metareview above, we believe that many of the points raised have been addressed, therefore we lean toward acceptance for poster.\\n\\nNo ethics review was raised by the reviewers, and we agree with them.\"}", "{\"title\": \"Response 5\", \"comment\": \"> In line 429, the authors mention \\\"Why Did Scaling Laws Fail?\\\" but don't seem to fully answer this question. I would like the authors to explain this from the perspective of the domain's specificity. Is it because larger models learn more general alignment, which makes it harder for them to excel at aligning with a specific personality?\\n\\nThank you for this insightful question. 
Let me provide a comprehensive explanation incorporating our recent revisions. \\nOur analysis reveals that scaling laws, while powerful for general capabilities, demonstrate limitations in domain-specific tasks like personality alignment for several key reasons: \\n\\nLarger models like Llama-3-70B-Instruct are trained to maintain broad, general-purpose capabilities across diverse domains. This generalist optimization actually creates a trade-off: while these models gain comprehensive knowledge, they may sacrifice precision in specific domains like personality alignment. The models' tendency to draw from their broad knowledge base can interfere with maintaining consistent personality traits, as they attempt to balance multiple competing objectives. For instance, our data shows that when responding to user queries, Llama-3-70B-Instruct frequently defaults to broadly acceptable but personality-neutral responses, diluting the distinct personality traits we aim to capture.\\n\\nWe have revised line 429 to better articulate this insight. The success of PAS in outperforming larger models with fewer parameters demonstrates that in personality alignment, precision and specificity in activation control are more crucial than model scale. PAS achieves this by focusing specifically on personality-relevant activation patterns rather than general knowledge. This targeted approach proves more effective in addressing individual user preferences compared to relying solely on increased model size - explaining why even models with fewer parameters can achieve superior personality alignment when using PAS.\\n\\n\\n\\n---\\n\\nWe sincerely appreciate your insightful feedback, which has helped us significantly enhance our work. We have substantially expanded our analysis to include additional baselines, Dark Triad evaluations (18,192 samples), detailed inter-annotator agreement metrics, and in-depth discussions of scaling limitations in personality alignment. 
**We would be deeply grateful if you would kindly reconsider your assessment in light of these improvements. Thank you again for your thoughtful and constructive comments.**\"}", "{\"comment\": \"We are deeply grateful for your decision to increase the score! Your support and encouragement mean a tremendous amount to our team, and we sincerely appreciate your recognition of our work!\\n\\n\\nBest Regards,\\n\\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"title\": \"Response 3\", \"comment\": \"**Table 3: Country**\\n\\n| Country | Method | Agreeableness | Conscientiousness | Extraversion | Neuroticism | Openness |\\n| ----------- | -------- | ------------- | ----------------- | ------------ | ----------- | -------- |\\n| France | Few-shot | 1.54 | 1.55 | 1.37 | 1.33 | 1.40 |\\n| | P\\u00b2 | 1.71 | 1.83 | 1.67 | 1.73 | 1.56 |\\n| | PAS | **1.42** | **1.47** | **1.17** | 1.46 | **1.15** |\\n| Malaysia | Few-shot | 1.58 | 1.54 | 1.44 | 1.41 | 1.59 |\\n| | P\\u00b2 | 2.08 | 1.86 | 1.87 | 1.65 | 1.92 |\\n| | PAS | **1.38** | **1.35** | **1.21** | **1.41** | **1.07** |\\n| China | Few-shot | 1.39 | 1.38 | 1.36 | 1.34 | 1.34 |\\n| | P\\u00b2 | 1.81 | 1.66 | 1.83 | 1.53 | 1.37 |\\n| | PAS | **1.19** | **1.14** | **1.06** | **1.29** | **1.02** |\\n| Norway | Few-shot | 1.43 | 1.39 | 1.39 | 1.41 | 1.52 |\\n| | P\\u00b2 | 1.55 | 1.66 | 1.74 | 1.60 | 1.70 |\\n| | PAS | **1.18** | **1.20** | **1.24** | **1.16** | **1.06** |\\n| Germany | Few-shot | 1.51 | 1.54 | 1.56 | **1.17** | 1.44 |\\n| | P\\u00b2 | 1.58 | 1.55 | 2.11 | 1.47 | 1.57 |\\n| | PAS | **1.38** | **1.30** | **1.32** | 1.21 | **1.36** |\\n| Sweden | Few-shot | 1.41 | 1.52 | 1.52 | 1.30 | 1.48 |\\n| | P\\u00b2 | 1.67 | 1.70 | 1.61 | 1.71 | 1.67 |\\n| | PAS | **1.25** | **1.39** | **1.38** | **1.21** | **1.30** |\\n| Finland | Few-shot | 1.45 | 1.46 | 1.54 | 1.56 | 1.62 |\\n| | P\\u00b2 | 1.76 | 1.66 | 1.72 | 1.80 | 1.82 |\\n| | PAS | **1.33** | **1.27** | **1.30** | **1.44** | **1.41** 
|\\n| New Zealand | Few-shot | 1.59 | 1.60 | 1.63 | 1.40 | 1.53 |\\n| | P\\u00b2 | 1.91 | 1.98 | 2.07 | 1.62 | 1.67 |\\n| | PAS | **1.21** | **1.23** | **1.31** | **1.33** | **1.21** |\\n| Thailand | Few-shot | 1.47 | 1.45 | 1.32 | 1.46 | 1.43 |\\n| | P\\u00b2 | 1.65 | 1.52 | 1.68 | 1.91 | 1.90 |\\n| | PAS | **1.29** | **1.30** | **1.05** | **1.23** | **1.06** |\"}
\\n\\n\\n\\n**Table 5: Performance comparison of different parameter-efficient tuning methods for Llama-3-Instruct 8B**\\n\\n| Method | Agreeableness \\u2193 | Conscientiousness \\u2193 | Extraversion \\u2193 | Neuroticism \\u2193 | Openness \\u2193 | Composite Score |\\n| ---------------- | --------------- | ------------------- | -------------- | ------------- | ---------- | --------------- |\\n| Full Fine-tuning | 1.21 | 0.99 | 1.03 | 0.88 | 0.78 | 4.89 |\\n| LoRA | 1.16 | 1.05 | 0.97 | 0.93 | 0.83 | 4.94 |\\n| Q-LoRA | 1.08 | 1.12 | 1.09 | 0.85 | 0.90 | 5.04 |\\n| Prompt-tuning | 1.25 | 1.07 | 1.01 | 0.96 | 0.86 | 5.15 |\\n| **PAS (Ours)** | **0.94** | **0.91** | **0.86** | **0.98** | **0.72** | **4.41** |\\n\\n**Table 6: Open-ended generation performance comparison for Llama-3-Instruct 8B**\\n\\n| Method | PAS Wins | Ties | PAS Loses |\\n| ---------------- | -------- | ---- | --------- |\\n| Full Fine-tuning | **41%** | 30% | 29% |\\n| LoRA | **43%** | 33% | 24% |\\n| Q-LoRA | **42%** | 35% | 23% |\\n| Prompt-tuning | **45%** | 31% | 24% |\\n\\n\\n\\nEven more encouraging are the results from our open-ended generation evaluation (Table 6). **When compared head-to-head in generating natural language responses, PAS consistently demonstrates superior performance against all parameter modification approaches. The win rates are particularly noteworthy: 41% against full fine-tuning, 43% against LoRA, 42% against Q-LoRA, and 45% against prompt-tuning, with relatively low loss rates (23-29%).** These results suggest that PAS's activation-based approach not only matches but exceeds the performance of traditional parameter modification methods, while offering significant advantages in terms of computational efficiency and deployment flexibility. Your comments have helped us better emphasize these important comparative analyses. 
We have revised Appendix E.4.1.\\n\\n> Can the authors provide more details on the human evaluation process, such as annotator screening, training, and inter-annotator agreement metrics? This would help validate the human evaluation results. \\n\\n\\n\\nThank you for this important question. We have revised Appendix C.5 to provide comprehensive details about our evaluation process, which was designed to ensure reliability and reproducibility.\\n\\nOur human evaluation involved three qualified evaluators, all graduate or doctoral students with backgrounds in machine learning, computer science, and cognitive psychology. To establish a robust foundation for evaluation, **each evaluator first completed the IPIP-NEO-120 questionnaire, providing their own personality baseline.** Before beginning the human evaluation, evaluators underwent a standardization process where they reviewed 30 example cases with GPT-4o ratings, ensuring consistent understanding of the evaluation criteria and scoring standards.\\n\\nThe evaluation process itself was systematically structured and controlled. We developed a Python-based annotation tool that presented anonymized samples in randomized order to prevent ordering bias (Figure 14 in the Appendix). All responses were automatically saved in JSON format for subsequent analysis. **The Inter-Annotator Agreement (IAA) score is 0.82 for human evaluations and 0.85 for human self-evaluators, which validates the reliability and stability of our assessment methodology.** This systematic approach helped ensure consistency and minimize potential biases in the evaluation process. 
The complete details of our human evaluation protocol, including evaluator selection criteria, training process, and evaluation tools, are documented in Appendix C.5: \\\"Human Evaluation Experiment Details.\\\"\"}", "{\"title\": \"Response 6\", \"comment\": \"> How do you expect the methods to perform on multilingual models and non-English datasets?\\n\\nThe PAPI dataset's inclusion of participants from multiple countries provides a strong foundation for evaluating and validating cross-cultural personality alignment. **PAS operates on internal model activations rather than language-specific features.** This architecture-level intervention approach means PAS can theoretically be applied to any transformer-based language model, regardless of the languages it supports. **Our empirical results support this theoretical advantage - as shown in Table 3 and detailed in Appendix E.5**, PAS demonstrates strong performance across diverse linguistic and cultural groups. For instance, we observe consistently strong alignment scores in non-English speaking countries: China (Openness=1.02, Extraversion=1.06), France (Extraversion=1.17, Openness=1.15), and Germany (Conscientiousness=1.30, Agreeableness=1.38). The robust performance across these linguistically diverse populations suggests that PAS effectively captures personality traits independent of language-specific characteristics. \\n\\n> The discussion on negative societal impacts of AI hyper-personalization (e.g. filter bubbles, opinion polarization) and is light.\\n\\nThank you for raising this critical concern about AI personalization's societal impacts. 
We have significantly expanded our discussion of ethical implications in a **new \\\"Ethics Statement\\\" section, which thoroughly examines potential risks like psychological filter bubbles and echo chambers.** We particularly focus on how personality-aligned AI systems might inadvertently reinforce existing behavioral patterns, especially for users scoring high in Neuroticism or Dark Triad traits. **The section also addresses privacy concerns and proposes concrete mitigation strategies**, including: (1) a dynamic alignment boundary system that monitors and adjusts alignment intensity to prevent extreme behavioral reinforcement, (2) an adaptive content diversity mechanism that strategically introduces alternative viewpoints while maintaining personality alignment, and (3) robust privacy protection frameworks for securing personality data. We appreciate your comment highlighting these important considerations, as it has helped us develop a more comprehensive framework for responsible deployment of personality-aligned AI systems. \\n\\n> Finally, exploring beyond multiple choice for collecting preference data, such as open-ended text can be interesting and more useful.\\n\\n**While open-ended preference data is indeed valuable, we've been particularly mindful of ethical considerations and privacy concerns in collecting large-scale personal narratives. As an exciting alternative approach, we envision leveraging our existing PAPI dataset to generate naturalistic personality descriptions.** For example:\\n\\n```markdown\\nOriginal IPIP responses:\\n- \\\"Trust others (Very Accurate)\\\"\\n- \\\"Jump into things without thinking (Moderately Inaccurate)\\\"\\n- \\\"Dislike yourself (Neither Accurate Nor Inaccurate)\\\"\\n\\nPotential transformed description:\\n\\\"This individual shows a strong tendency to trust and believe in others' good intentions. They typically approach decisions with careful consideration rather than impulsive action. When it comes to self-perception, they maintain a balanced view, neither particularly critical nor overly confident in their self-assessment.\\\"\\n```\\n\\n**This transformation approach could potentially enrich our personality alignment framework while respecting privacy boundaries and ethical constraints inherent in collecting direct personal narratives.** We are excited to explore this direction in future work, as it offers a promising path to combine the reliability of structured assessments with the richness of natural language descriptions. \\n\\n---\\n\\nWe hope our comprehensive responses and substantial revisions have adequately addressed all your concerns. Your expert guidance has been invaluable in strengthening this work, and **we would be deeply appreciative if you would kindly reconsider your assessment of our paper in light of these improvements. Thank you again for your time and detailed feedback.**\"}
**Extended Dataset Coverage**: We expanded PAPI with 18,192 Dark Triad questionnaire samples, enabling comprehensive assessment of both positive (Big Five) and negative (Machiavellianism, Narcissism, Psychopathy) personality dimensions. This enhancement provides a more complete framework for personality alignment evaluation (Abstract, Section 1: Introduction, Section 3: Dataset Construction).\\n\\n2. **Demographic Analysis**: We added comprehensive analysis of PAS performance across different age groups, genders, and nationalities, demonstrating robust alignment capabilities across diverse populations. This validates the broad applicability of our approach (Appendix E.5: Diverse Demographic Groups).\\n\\n3. **Evaluation Protocol Details**: We supplemented our evaluation framework with specific details about the human evaluation interface, annotator qualifications, and inter-annotator agreement metrics (IAA=0.82). These additions enhance the reproducibility and reliability of our results (Appendix C.5).\\n\\n4. **Technical Discussion**: We provided additional analysis explaining why scaling laws don't automatically improve personality alignment, highlighting how PAS's targeted intervention achieves better results than larger models through precise control of personality-relevant components (Section 5: Experiments).\\n\\n5. **Ethics Framework**: We supplemented our discussion with specific mitigation strategies for potential risks of personality alignment, including dynamic alignment boundaries and content diversity systems. This provides a practical framework for responsible deployment (Section 7: Ethics Statement).\\n\\n6. **Method Analysis Detail**: We added a detailed comparison of how PAS differs from traditional methods in modifying model representations, particularly highlighting our two-stage approach versus global parameter updates. 
This clarifies PAS's unique advantages in precise personality alignment (Appendix E.6).\\n\\nWith the generous guidance from our esteemed reviewers, we have been able to substantially enhance our paper's technical depth, experimental rigor, and practical impact. We are deeply grateful for the opportunity to improve our work and would be honored if the reviewers find these comprehensive enhancements worthy of a more favorable evaluation. We believe these additions have helped realize the full potential of our research contribution to the field of personality alignment in AI systems.\"}", "{\"comment\": \"Dear Reviewer 4LUh,\\n\\nAs the discussion period is coming to an end soon, we wanted to check if you have had a chance to review our responses. Please let us know if your questions have been adequately addressed - we are happy to provide any additional clarification needed. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"summary\": \"This paper explores the personality alignment of large language models (LLMs). Specifically, it introduces a new dataset and proposes the PAS method for personality alignment based on this dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The strengths of this paper can be summarized in three key points:\", \"It contributes a new dataset (or more accurately, a dataset generation pipeline).\", \"The proposed method is simple yet highly effective.\", \"The experimental analysis is comprehensive, with particularly detailed descriptions of the experimental setup.\"], \"weaknesses\": [\"However, I believe the paper has the following weaknesses:\", \"The scope of the contribution is somewhat limited. I would have liked to see this method applied to a broader range of personalities, rather than being restricted to just the five personalities of the Big Five model. 
The authors could consider additional datasets, such as the Dark Triad or even the MBTI test (though MBTI remains controversial in psychology). Expanding in this way would enhance the paper\\u2019s overall contribution.\", \"While the proposed method performs well on the dataset, it is actually quite similar to approaches used in many previous studies [1][2]. As a result, the novelty of the method is somewhat lacking, and the authors should be careful not to overstate their contribution.\", \"Essentially, the proposed method is a form of personalized alignment, so the authors should compare it against more baselines [3].\", \"Figures 6 and 7 show a significant disparity between LLM-as-a-judge evaluations and human evaluations, which makes me question the consistency and reliability of the judgment process.\", \"Other details: In line 1304, the authors mention that human annotators come from machine learning and computer science, but ML is a part of CS. Additionally, could the authors disclose the educational background of the annotators (undergraduate or graduate)?\", \"[1] Zheng, Chujie, et al. \\\"On prompt-driven safeguarding for large language models.\\\" *Forty-first International Conference on Machine Learning*. 2024.\", \"[2] Wang, Haoran, and Kai Shu. \\\"Backdoor activation attack: Attack large language models using activation steering for safety-alignment.\\\" *arXiv preprint arXiv:2311.09433* (2023).\", \"[3] https://github.com/liyongqi2002/Awesome-Personalized-Alignment\"], \"questions\": \"Could the authors explain why the PAS method is so effective by looking at the internal hidden layer representations of the model? Intuitively, direct training approaches like DPO/PPO might yield more noticeable improvements, but the results here contradict my expectations.\\n\\nIn line 429, the authors mention \\\"Why Did Scaling Laws Fail?\\\" but don't seem to fully answer this question. 
I would like the authors to explain this from the perspective of the domain's specificity. Is it because larger models learn more general alignment, which makes it harder for them to excel at aligning with a specific personality?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces the concept of Personality Alignment for LLMs, that is, tailoring of responses to individual user preferences based on personality traits. Using the Big Five personality theory, the authors introduces the PAPI dataset modeling human personality distributions. The authors also propose a LLM tuning method based on disabling certain activation heads -- Personality Activation Search (PAS). Evaluation results demonstrate PAS\\u2019s performance and efficiency compared to traditional alignment techniques including RL- and prompting-based methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, I like this paper and I think it's a very good attempt in aligning LLMs from a personality perspective.\", \"The paper is generally well-written and easy to read, illustrations are also made in a good quality.\", \"It's cool to see how personality affects downstream reasoning tasks. It has always been something missing in prior personality-related LLM work. And it's definite a god step here.\", \"The proposed method is efficient, and can provide better results compared to prompting-based methods. It may have wide applications in tailoring persona/personality-specific chatbots to end users.\"], \"weaknesses\": [\"Presentation: The authors claim that they collected 307k human samples to craft the PAPI dataset and the use of the IPIP-NEO-120/IPIP-NEO-300 for evaluations as part of their contributions. 
However, the IPIP-NEO series inventories were frequently used in prior works (e.g., Jiang et al., 2024 as mentioned in the paper), and the 307k responses are also publicly available.\", \"The implications of the 307k PAPI dataset are not that clear. For example, in the experiments authors performed, it seems only the overall average personality tendencies and specific participants' responses are used. So what's the actual advantages of using a such large dataset?\", \"The proposed tuning method, although efficient, but requires access to model weights. I doubt its ability to generalize such methods to black-box methods (e.g., GPTs) in the future.\"], \"questions\": [\"In any of your experiments, did any of your tuning or evaluation methods trigger safety wall? For example, the model might refuse to answer some questionnaire questions related to consciousness or self-awareness.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 3\", \"comment\": \"###### Llama-3-70B-Instruct\\n\\n| Method | Alignment Mode | Agreeableness \\u2193 | Conscientiousness \\u2193 | Extraversion \\u2193 | Neuroticism \\u2193 | Openness \\u2193 | Machiavellianism \\u2193 | Narcissism \\u2193 | Psychopathy \\u2193 | Score |\\n| -------------------- | ------------------------ | --------------- | ------------------- | -------------- | ------------- | ---------- | ------------------ | ------------ | ------------- | -------- |\\n| PPO | White-Box (Alignment) | 1.56 | 1.59 | 1.43 | 1.40 | 1.56 | 1.52 | 1.96 | 1.90 | 12.92 |\\n| DPO | White-Box (Alignment) | 1.46 | 1.25 | 1.45 | 1.48 | 1.57 | 1.22 | 2.08 | 1.79 | 12.30 |\\n| *Prompt-MORL* | White-Box (Alignment) | 1.10 | 1.11 | 1.02 | 1.30 | 1.24 | 1.15 | 1.99 | 1.76 | 10.67 |\\n| *Personalized-Soups* | White-Box (Alignment) | 0.99 | 0.96 | 1.16 | 1.02 | 1.08 | 1.11 | 1.95 | 1.77 | 10.04 |\\n| Few-Shot | Black-Box (Prompt-Based) | 1.06 | 
0.94 | 0.96 | 1.03 | 1.22 | 1.04 | 1.89 | 1.80 | 9.94 |\\n| P\\u00b2 | Black-Box (Prompt-Based) | 1.42 | 1.33 | 1.36 | 1.35 | 1.66 | 1.02 | 2.11 | 1.93 | 12.18 |\\n| **PAS (Ours)** | White-Box (Alignment) | **0.98** | **0.89** | **0.87** | **1.01** | **0.99** | **1.01** | **1.84** | **1.62** | **9.21** |\\n\\n\\n\\n\\n\\n**Our experimental results demonstrate that while these methods achieve strong performance (Personalized-Soups: 9.66 composite score on Llama-3-8B, Prompt-MORL: 10.88), PAS still achieves superior alignment (8.89) with significantly lower computational costs. Notably, Personalized-Soups shows particular strength in Narcissism alignment (1.76 vs 1.85), while PAS demonstrates better performance across most Big Five traits.** These baseline comparisons help validate our method's effectiveness while acknowledging valuable prior contributions. We provide more implementation details in the Appendix C.2.\\n\\n---\\n\\n[1] Jang J, Kim S, Lin B Y, et al. Personalized soups: Personalized large language model alignment via post-hoc parameter merging[J]. arXiv preprint arXiv:2310.11564, 2023.\"}", "{\"comment\": \"Dear Reviewer nZZi:\\n\\nWe deeply appreciate your time and effort in the review process. We apologize for contacting you again to ensure our responses adequately address your concerns!\\n\\nWe have carefully considered your feedback and worked diligently during the discussion phase to address your concerns. For example, we have supplemented the main table with Dark Triad inventory content and introduced 18,192 real Dark Triad samples, making our PAPI dataset an Alignment dataset that evaluates both positive and negative personalities! We have also added Prompt-MORL and Personalized-Soups as new baselines, and our method outperforms these personalized alignment baselines! For all other concerns you raised, we have provided detailed and sufficient responses above.\\n\\nWe have made thorough revisions based on your valuable feedback! 
This has enhanced the paper's overall contribution! Please contact us if you need any clarification or have other questions. We are happy to continue the discussion.\\n\\nThank you again, and we look forward to your further feedback.\\n\\n\\nBest Regards,\\n\\nAuthors of \\\"Personality Alignment of Large Language Models\\\"\"}", "{\"title\": \"Response 1\", \"comment\": \"> The scope of the contribution is somewhat limited. I would have liked to see this method applied to a broader range of personalities, rather than being restricted to just the five personalities of the Big Five model. The authors could consider additional datasets, such as the Dark Triad or even the MBTI test (though MBTI remains controversial in psychology). Expanding in this way would enhance the paper\\u2019s overall contribution.\\n\\nThank you for this insightful suggestion about expanding the scope of personality assessment. We have followed your advice and taken steps in this direction, **expanding our evaluation framework by incorporating the Dark Triad inventory alongside the Big Five traits.** Specifically, we collected and analyzed 18,192 independent samples measuring Machiavellianism, Narcissism, and Psychopathy through 27 targeted questions. Our results demonstrate PAS's effectiveness across these additional personality dimensions - achieving strong alignment scores for Machiavellianism (0.96), Narcissism (1.85), and Psychopathy (1.67) when using the Llama-3-8B-Instruct model. These scores are particularly noteworthy given the complexity of aligning models with these challenging personality traits. 
The Dark Triad provides an important complementary perspective to the Big Five, allowing us to evaluate alignment across both socially desirable and potentially problematic personality dimensions.\\n\\n\\n**Table 1: Dark Triad** \\n\\n###### GPT-4o Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| -------- | ---------------- | ---------- | ----------- |\\n| Few-Shot | **0.80** | **0.76** | **0.83** |\\n| P^2 | 1.17 | 2.04 | 2.00 |\\n\\n###### Llama-3-8B-Instruct Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| ------------------ | ---------------- | ---------- | ----------- |\\n| PPO | 1.48 | 1.98 | 2.19 |\\n| DPO | 1.41 | 1.99 | 2.12 |\\n| Prompt-MORL | 1.42 | 2.14 | 1.78 |\\n| Personalized-Soups | 1.08 | **1.76** | 1.84 |\\n| Few-Shot | 1.16 | 2.03 | 2.00 |\\n| P^2 | 1.17 | 2.04 | 2.01 |\\n| PAS (Ours) | **0.96** | 1.85 | **1.67** |\\n\\n###### Llama-3-70B-Instruct Results\\n\\n| Method | Machiavellianism | Narcissism | Psychopathy |\\n| ------------------ | ---------------- | ---------- | ----------- |\\n| PPO | 1.52 | 1.96 | 1.90 |\\n| DPO | 1.22 | 2.08 | 1.79 |\\n| Prompt-MORL | 1.15 | 1.99 | 1.76 |\\n| Personalized-Soups | 1.11 | 1.95 | 1.77 |\\n| Few-Shot | 1.04 | 1.89 | 1.80 |\\n| P^2 | 1.02 | 2.11 | 1.93 |\\n| PAS (Ours) | **1.01** | **1.84** | **1.62** |\\n\\n\\nThe combined framework of Big Five and Dark Triad traits now offers a more comprehensive understanding of personality alignment, spanning from prosocial traits to more challenging behavioral tendencies. This expansion has strengthened our evaluation methodology and broadened the practical applicability of our approach. It also further demonstrates the broad potential of our PAS approach to apply directly to more personality dimensions and combine with more personality research to achieve AI assistants that meet more personalized preferences. 
Sections 1, 3, and 5 have been revised to reflect this, as it has helped us develop a more robust and comprehensive framework for personality alignment.\"}" ] }
0CvJYiOo2b
Revisiting PCA for Time Series Reduction in Temporal Dimension
[ "Jiaxin Gao", "Wenbo Hu", "Yuntian Chen" ]
Deep learning has significantly advanced time series analysis (TSA), enabling the extraction of complex patterns for tasks like classification, forecasting, and regression. While dimensionality reduction has traditionally focused on the variable space—achieving notable success in minimizing data redundancy and computational complexity—less attention has been paid to reducing the temporal dimension. In this study, we revisit Principal Component Analysis (PCA), a classical dimensionality reduction technique, to explore its utility in temporal dimension reduction for time series data. It is generally thought that applying PCA to the temporal dimension would disrupt temporal dependencies, leading to limited exploration in this area. However, our theoretical analysis and extensive experiments demonstrate that applying PCA to sliding series windows not only maintains model performance but also enhances computational efficiency. In auto-regressive forecasting, the temporal structure is partially preserved through windowing, and PCA is applied within these windows to denoise the time series while retaining their statistical information. By preprocessing time series data with PCA, we reduce the temporal dimensionality before feeding it into TSA models such as Linear, Transformer, CNN, and RNN architectures. This approach accelerates training and inference and reduces resource consumption. Notably, PCA improves Informer training and inference speed by up to 40% and decreases GPU memory usage of TimesNet by 30%, without sacrificing model accuracy. Comparative analysis against other reduction methods further highlights the effectiveness of PCA in enhancing the efficiency of TSA models. Code is provided in the supplementary materials.
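The preprocessing pipeline the abstract describes — slicing the series into sliding windows and applying PCA along the temporal dimension before the data reaches the TSA model — can be sketched with plain numpy. This is an illustrative reconstruction, not the authors' released code; `make_windows`, `fit_pca`, and `transform` are hypothetical helper names:

```python
import numpy as np

def make_windows(series, window, horizon):
    """Slice a 1-D series into (history, target) training pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X), np.array(y)

def fit_pca(X_train, k):
    """Fit PCA on the training windows; each time step acts as a 'feature'."""
    mean = X_train.mean(axis=0)
    # SVD of the centred window matrix yields the principal directions.
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:k]                      # components: (k, window)

def transform(X, mean, components):
    """Project windows onto the top-k components (temporal reduction)."""
    return (X - mean) @ components.T         # (n_windows, k)

# toy usage mirroring the paper's setting: 336-step histories reduced to 48 dims
rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) / 25.0) + 0.1 * rng.standard_normal(2000)
X, y = make_windows(series, window=336, horizon=96)
mean, comps = fit_pca(X, k=48)
X_red = transform(X, mean, comps)            # fed to the TSA model in place of X
```

At inference time the same `mean` and `comps` fitted on the training split would be reused without re-fitting, which is where the reported training/inference speed-ups come from.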
[ "principal component analysis (PCA)", "time series classification", "time series forecasting", "time series extrinsic regression" ]
Reject
https://openreview.net/pdf?id=0CvJYiOo2b
https://openreview.net/forum?id=0CvJYiOo2b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xlJjAELqgA", "wogolSeerE", "vpWScS8ojt", "rmEARUBm5L", "qXk2eEAnVd", "oeDQeFmBgn", "lghebLjSFT", "j7Ao4ol8I5", "fiT60sweQ6", "WWcPAqDoBx", "SDwVODjQCL", "Rt2DXFQPWJ", "Q84iWdkN43", "PV9Rp2OX53", "OkMbdu5YZf", "G5WN7wG3Tm", "EpkCLPRENc", "DAQPUhpEU1", "8WrpsN5PYp", "5dCH2UuokW", "3Vgla7bfth", "1fJMTylJpb", "1AIf9fK1yR" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1733110185813, 1732285393667, 1732594157366, 1732286350433, 1732287280191, 1732639588411, 1732781009131, 1732288950356, 1730456432302, 1732781402392, 1732287845266, 1732781156807, 1732288326308, 1732781448888, 1730690286454, 1733110299038, 1729971248029, 1732287419342, 1737523801837, 1730210511154, 1732289262397, 1732289331890, 1734800528584 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6918/Area_Chair_GndG" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Reviewer_EsNx" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Reviewer_bfos" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Reviewer_bfos" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6918/Reviewer_EsNx" ], [ "ICLR.cc/2025/Conference/Submission6918/Area_Chair_GndG" ], [ "ICLR.cc/2025/Conference/Submission6918/Reviewer_ohhk" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6918/Reviewer_KQnv" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Authors" ], [ "ICLR.cc/2025/Conference/Submission6918/Area_Chair_GndG" ] ], "structured_content_str": [ "{\"title\": \"Please provide a response to the authors of submission 6918\", \"comment\": \"Dear Reviewer KQnv,\\n\\nThe discussion period is almost over, so please read the responses the authors of submission 6918 have provided to your review.\\n\\nPlease specify which of your concerns were addressed and explain your decision to update or not update your score.\\n\\nAll the best,\\n\\nThe AC\"}", "{\"comment\": \"We deeply thank the valuable review and do our best to address questions and weaknesses.\", \"w1\": \"The purpose of PCA.\", \"a1\": \"Thanks for your comments. In our study, PCA is employed as a pluggable, general-purpose preprocessing technique for time series analysis (TSA), which can be integrated with various TSA models and applied to different downstream TSA tasks. While the early layers of neural networks may have similar effects, it's necessary to design different early layers for different neural networks to ensure that these layers can effectively extract series features and reduce dimensionality. Additionally, adding a dimensionality reduction layer at the beginning of the existing neural network might increase the training/inference burden and raise the risk of overfitting. Furthermore, we also experimented with adding a linear/1D-CNN dimension reduction layer before the original neural network to achieve dimensionality reduction. 
However, the experimental results shown in Table 6 indicate that their performance is inferior to that of PCA-based dimensionality reduction.\", \"w2\": \"Comparison with other frequency-based dimensionality reduction techniques.\", \"a2\": \"Thank you for your useful suggestions; we have compared PCA with FFT and DWT as suggested. In the experiments, the original series is first transformed from the time domain to the frequency domain using either FFT or DWT. The top k frequency components (where k is 48, the same as the number of principal components) are then selected and input into the TSA models. The results are shown in Table A. It is evident that the top k frequency components obtained using FFT or DWT fail to accurately capture the key information in the original series or to compress the series effectively, leading to a significant decrease in model performance. We have also included these contents in Section G of the Supplemental Materials.\", \"table_a\": \"Comparison of PCA with FFT and DWT as series reduction methods. Bold font represents the superior result.\\n|Method|||Linear||PCA||FFT||DWT| \\n|-|-|-|-|-|-|-|-|-|-|\\n|Dataset|Length|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|\\n|ETTm1|96|**0.028**|**0.125**|0.029|0.126|2.110|1.328|1.827|1.299|\\n||192|0.043|0.154|**0.042**|**0.151**|2.086|1.318|1.943|1.344|\\n||336|0.059|0.180|**0.056**|**0.176**|2.205|1.356|1.767|1.279|\\n||720|**0.080**|**0.211**|0.081|0.212|2.232|1.348|1.981|1.358|\\n|ETTm2|96|0.066|0.189|**0.065**|**0.188**|3.417|1.467|1.330|1.010|\\n||192|0.094|0.230|**0.092**|**0.228**|3.883|1.566|1.460|1.068|\\n||336|**0.120**|**0.263**|0.123|0.267|3.273|1.442|1.421|1.049|\\n||720|0.175|**0.320**|**0.174**|**0.320**|3.371|1.465|1.572|1.111|\\n|Better Count|||7||10||0||0|
After very careful consideration, although I appreciate very much the new empirical evidence and agree that it has greatly improved the quality of the paper, I decided to maintain my original score.\\n\\nThe major concern is that I am still not fully convinced by the main motivation, i.e., PCA should be taken as a versatile plug-in before feeding time series into neural networks. The main reason is that usually one would like to have all information fed into the neural network, which is usually carefully designed and will be carefully trained under proper constraints. Performing dimension reduction before feeding the input data into the neural network, is sort of implicitly suggesting that the neural network lacks the ability to identify the necessary information through the noise (which, at the same time, can be removed by the simple PCA method).\\n\\nI would suggest the authors to set up a stronger motivation for the adoption of PCA. One possible direction is considering the unique property of time series datasets, for example, the task-specific datasets are mostly pretty small but still with noise, while training neural networks is also notoriously hard.\"}", "{\"comment\": \"Q1: Title of section 3.2.\", \"a3\": \"Thanks for your helpful suggestion. Using \\\"intuitional justification\\\" as title would be more appropriate and we have updated the title in the revised paper.\", \"q2\": \"More TSC datasets.\", \"a4\": \"Thanks for your comments. We adopted the TSC datasets used in the TimesNet[1] paper and selected series with long length for our experiments (series with short lengths have limited benefit from PCA dimensionality reduction). We have added the experiments on multiple UCR datasets based on your suggestions. The results in Table B show that the application of PCA doesn't decrease accuracy; Instead, it slightly improves performance in some cases. 
More importantly, the computational cost is greatly reduced, showing the effectiveness of PCA compression processing. This PCA processing aligns with the concept of Pareto optimization, improving or maintaining accuracy while greatly reducing computational resources. We have also included these contents in Section H of the Supplemental Materials.\", \"table_b\": \"TSC experiments on the UCR datasets. The * symbols after models indicate the application of PCA before inputting the series into the models. The accuracy metric is adopted. Bold font represents the superior result.\\n|Dataset|Linear|Linear*|Informer|Informer*|FEDformer|FEDformer*|\\n|-|-|-|-|-|-|-|\\n|ACSF1|0.400|**0.580**|0.640|**0.780**|0.560|**0.730**|\\n|Adiac|0.684|**0.760**|0.538|**0.716**|0.560|**0.729**|\\n|ChlorineConcentration|0.553|**0.771**|0.564|**0.722**|**0.607**|0.544|\\n|Computers|0.536|**0.600**|0.628|**0.640**|**0.830**|0.648|\\n|Earthquakes|0.597|**0.691**|**0.748**|0.719|0.734|**0.755**|\\n|ElectricDevices|**0.482**|0.479|**0.695**|0.605|**0.645**|0.563|\\n|GunPointAgeSpan|0.864|**0.892**|0.889|**0.930**|0.775|**0.892**|\\n|GunPointMaleVersusFemale|0.731|**0.991**|**0.997**|**0.997**|0.706|**0.991**|\\n|GestureMidAirD1|0.477|**0.500**|0.431|**0.515**|**0.692**|0.500|\\n|GestureMidAirD2|**0.485**|0.454|**0.523**|0.400|0.346|**0.415**|\\n|GestureMidAirD3|**0.323**|0.254|**0.377**|0.277|0.231|**0.292**|\\n|AllGestureWiimoteX|**0.296**|0.283|0.289|**0.403**|**0.460**|0.384|\\n|AllGestureWiimoteY|0.319|**0.324**|**0.516**|0.387|0.409|**0.424**|\\n|AllGestureWiimoteZ|**0.320**|**0.320**|0.296|**0.372**|**0.480**|0.366|\\n|FordA|0.504|**0.507**|0.523|**0.817**|0.639|**0.822**|\\n|FordB|0.532|**0.546**|0.549|**0.709**|0.672|**0.685**|\\n|BetterCount|5|12|6|11|6|10|\\n\\n[1] Wu, Haixu, et al. 
\\\"Timesnet: Temporal 2d-variation modeling for general time series analysis.\\\" arXiv preprint arXiv:2210.02186 (2022).\", \"q3\": \"Accucary of Table 2.\", \"a5\": \"Thanks for your careful observations.The first two datasets are challenging for TSC. We retested them and obtained the same results. We also verified the results reported in the TimesNet[1] and found that their method also struggled with classifying these datasets. We speculate that the following reasons contribute to this phenomenon: First, these datasets have fewer samples, which introduces some randomness in the results. Additionally, different models have varying difficulties in capturing the features of different datasets. For a specific model, PCA preprocessing may make some datasets easier to learn while making others more difficult. However, from the overall results of TSC, TSF, and TSER, PCA preprocessing does not degrade model performance and can accelerate training and inference while reducing memory usage.\", \"q4\": \"Backbone in Table 5.\", \"a6\": \"We apologize for any inconvenience. The backbone model for the results in Table 5 is the Linear model.\", \"q5\": \"Explanation of positive trends.\", \"a7\": \"We apologize for any inconvenience in understanding. Here is our detailed explanations: if we assume all historical windows in the training set exhibit an increasing trend, and we simultaneously change them to a decreasing trend, while keeping the trend of the target series unchanged (also assumed to be an increasing trend), this would not significantly affect the model's learning. Essentially, the model would learn that a decreasing trend in historical series can lead to an increasing trend in future series, rather than an increasing trend leading to an increasing trend. 
Similarly, applying the same transformation or scaling the periodic information in all historical windows in the training set would not significantly impact the model's learning.\\nThrough these observations, we want to show that the presence of specific trends or periodicities in historical series is not necessarily essential for the learning process of TSA models. Instead, the presence of consistent and coherent patterns is sufficient for models to provide accurate predictions. Therefore, although PCA may alter the trend or periodicity, it introduces new coherent patterns (equivalent to applying the same transformations to all historical windows)\\u2014such as the main directions of variation, denoised low-dimensional representations, and latent features. These new consistent features in the training set enable the model to learn effectively.\"}", "{\"comment\": \"We sincerely appreciate the valuable review and will do our best to address the questions and weaknesses.\\n\\nW1&Q1: Comparation with other representation learning methods.\", \"a1\": \"Thank you for your insightful comments and references. We have carefully read the methods you mentioned. While these representation learning methods are effective for feature extraction and series compression in specific scenarios, they are not general-purpose approaches and cannot be easily integrated with arbitrary time series models or tasks, and obtaining time series representations (compressing time series) is just the initial part of these methods. For example, in the TS2Vec framework, an SVM classifier is required for TSC tasks, while a ridge regression model is needed for TSF tasks. Therefore, these methods are more specialized models rather than pluggable, general-purpose frameworks or modules, which is inconsistent with our goal for PCA. Additionally, the primary objective of these representation learning methods is to learn better representations rather than to accelerate training/inference. 
As a result, they typically do not optimize for training efficiency or memory usage as extensively as PCA does. As your suggestions, we compared PCA with these representation learning-based methods on TSC tasks. As shown in Table A, the Linear + PCA model achieved the best performance in most settings. Additionally, PCA requires less training/inference overhead and less GPU memory compared to representation learning methods, as shown in Table B. We have also included these contents in Section I of the Supplemental Materials.\", \"table_a\": \"Comparation of PCA with T-Loss, TS2Vec, and TimeVQVAE on TSC experiments. The accuracy metric is adopted. Bold font is the superior result.\\n|| Linear+PCA | T-Loss | TS2Vec | TimeVQVAE |\\n| - | - | - | - | - |\\n| EthanolConcentration | **0.300** | 0.289 | 0.287 | 0.203 |\\n| Handwriting | 0.127 | 0.255 | **0.397** | 0.218 |\\n| SelfRegulationSCP1 | **0.805** | 0.780 | 0.795 | 0.719 |\\n| SelfRegulationSCP2 | **0.539** | 0.511 | 0.525 | 0.527 |\\n| UWaveGestureLibrary | 0.409 | 0.622 | 0.666 | **0.668** |\\n| Better Count | 3 | 0 | 1 | 1 |\", \"table_b\": \"Computational efficiency, and memory usage comparation of PCA with T-Loss, TS2Vec, and TimeVQVAE. Bold font is the superior result.\\n|| Linear+PCA | T-Loss | TS2Vec | TimeVQVAE |\\n| - | - | - | - | - |\\n| Training time (s) | **14.82** | 302.50 | 25.92 | 62.98 |\\n| Inference time (s) | **0.59** | 2.01 | 1.65 | 68.10 |\\n| Memory usage (MiB) | **484** | 1290 | 2424 | 2870 |\\n\\nW2&Q2: Comparation with other classification baselines.\", \"a2\": \"Thanks for your comments. In the top-tier study TimesNet[1], models such as Linear, FEDformer, and Informer are also used for TSC tasks, we also followed their experimental settings and tested the same models and datasets. As your suggestions, we applied PCA to SOTA classification models, InceptionTime[2] and ResNet[3]. The results are shown in Table C. 
It is evident that PCA is model-agnostic and remains effective even when applied to these SOTA classification models. We have also included these contents in Section J of the Supplemental Materials in the revised paper.\", \"table_c\": \"TSC experiments of Inception and ResNet. The accuracy metric is adopted. The * symbols after models indicate the application of PCA before inputting the series into the models. Bold font is the superior result. PCA preprocessing retains series principal information, matching TSC performance with original series, and enabling training/inference acceleration.\\n| Dataset | Inception | Inception* | ResNet | ResNet* |\\n| - | - | - | - | - |\\n| EthanolConcentration | 0.259 | **0.300** | 0.281 | **0.308** |\\n| Handwriting | 0.075 | **0.119** | 0.076 | **0.105** |\\n| SelfRegulationSCP1 | **0.833** | 0.758 | **0.867** | 0.754 |\\n| SelfRegulationSCP2 | 0.489 | **0.561** | 0.528 | **0.539** |\\n| UWaveGestureLibrary | **0.522** | 0.516 | **0.528** | 0.419 |\\n| Better Count | 2 | 3 | 2 | 3 |\\n\\n[1] Wu, Haixu, et al. \\\"Timesnet: Temporal 2d-variation modeling for general time series analysis.\\\" arxiv preprint arxiv:2210.02186 (2022). \\n[2] https://github.com/TheMrGhostman/InceptionTime-Pytorch/blob/master/inception.py \\n[3] https://github.com/hsd1503/resnet1d/blob/master/resnet1d.py\"}", "{\"title\": \"Comment\", \"comment\": \"I appreciate the effort the authors have put into the rebuttal, and as a result I have increased my score from 3 to 5. However, I cannot recommend acceptance of the paper at this time. The paper remains insufficiently positioned within the broader literature on representation learning, and the additional discussion and experiments provided in the rebuttal are not sufficient to clearly establish a significant contribution.\"}", "{\"comment\": \"Thank you for your response and further suggestions. We understand your concerns. 
Neural networks indeed possess a high generality; however, many preprocessing techniques can also be effectively combined with neural networks, including those you previously mentioned (DWT, FFT, etc). These preprocessing methods can reduce the learning difficulty for neural networks, leading to improved performance or higher training/inference efficiency. In the context of the TSA tasks we tested, PCA has proven to be an effective preprocessing tool for neural networks.\\n\\nMoreover, our contributions extend beyond proposing a novel, general approach for dimensionality reduction along the time dimension for time series. We also challenge the traditional perception that PCA cannot be applied to data with temporal dependencies. This contribution broadens the applicability of the classic dimensionality reduction method PCA, enabling it to play a more significant role in a wider range of fields.\\n\\nIn light of these contributions, we respectfully hope that you could reconsider your evaluation. Once again, thank you for your response and valuable suggestions.\"}", "{\"comment\": \"Q8: Expression issue.\", \"a8\": \"Thank you for pointing out the issue here. You are correct that our original expression was absolute. We have revised the expression as follows: Therefore, although PCA may alter the trend or periodicity, it introduces new coherent patterns\\u2014such as the main directions of variation, denoised low-dimensional representations, and latent features\\u2014that effectively benefit TSA model learning.\", \"q9\": \"5-run average.\", \"a9\": \"We apologize for not clarifying this in the paper. All our results are based on a 5-run average, which is consistent with other top-tier works (e.g., iTransformer, MR-Diff, DLinear).\", \"q10\": \"Classification performance in Table 2.\", \"a10\": \"Thank you for your careful observation. 
Despite the accuracy of the TimesNet network decreasing by 23.2% after applying PCA, the accuracy of the FEDformer network increased by 25.0% after applying PCA on the SelfRegulationSCP1 dataset. We speculate that the instability in PCA application on the SelfRegulationSCP1 dataset can be attributed to two main reasons: First, the SelfRegulationSCP1 dataset has fewer samples, which introduces some randomness in the results. Additionally, different models have varying difficulties in capturing the features of different datasets. For a specific model, PCA preprocessing may make some datasets easier to learn while making others more difficult. However, from the overall results of classification, forecasting, and regression, PCA preprocessing does not degrade model performance and can accelerate training and inference while reducing memory usage.\", \"q11\": \"Results in Table 3.\", \"a11\": \"The results for the linear model in Table 3 are taken from Table 9 of study [1] (arXiv version). Additionally, in Table 9 of study [1], the historical window length for Informer and FEDformer is 96, which does not fully leverage their performance capabilities. We retested these models with a historical window length of 336 to better utilize their potential.\\n\\n[1] Zeng, Ailing, et al. \\\"Are Transformers Effective for Time Series Forecasting?.\\\" arxiv preprint arxiv:2205.13504 (2022).\"}", "{\"summary\": \"This paper explores the application of Principal Component Analysis (PCA) for dimensionality reduction of the temporal dimension. The authors argue that PCA's ability to reduce dimensionality enables the extraction of essential features underlying time series, thereby improving the efficiency of downstream tasks. Experimentally, the study applies forecasting, classification, and extrinsic regression tasks to the PCA-learned representations. 
The results show significant improvements in computational time and memory consumption compared to purely supervised approaches applied directly to the raw time series.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"S1. I think it is interesting to use unsupervised learning as a first step to reduce memory and computation time for downstream supervised tasks. This approach could be particularly beneficial when dealing with large amounts of time series data with numerous timestamps.\", \"S2. The paper is well written and easy to follow, making the concepts and methods presented clear and accessible to the reader.\", \"S3. The experimental results show that PCA accelerates both the training and inference processes while reducing the GPU memory usage for the considered downstream tasks.\"], \"weaknesses\": [\"W1. A significant weakness of the paper is its lack of discussion and comparison with other representation learning methods.\", \"Several claims in the paper appear to be inaccurate, such as: \\\"To the best of our knowledge, there has been no systematic method for compressing time series data in the temporal dimension while preserving critical information\\\" and \\\"far less attention has been given to reducing the temporal dimension, despite the potential benefits of alleviating the burdens associated with processing long time series.\\\" In recent years, various unsupervised time series methods have effectively addressed this issue. For example, T-Loss [1] was one of the first models to fully compress the temporal dimension by leveraging contrastive learning and an encoder-only architecture. Another contrastive method, TS2Vec [2], learns representations that can be used for forecasting and classification in subsequent stages. 
Additionally, methods based on autoencoders with vector quantization [3,4] have demonstrated the ability to compress the temporal dimension by learning the core features of time series data.\", \"The use of PCA representation does not appear to enhance the performance of the supervised model. While the authors argue that PCA representation accelerates training and inference (and reduces memory usage), the omission of other representation learning methods\\u2014such as a basic convolutional encoder-decoder\\u2014makes it difficult to fully evaluate the contribution of this paper.\", \"W2. From an experimental perspective, several aspects seem questionable.\", \"For the classification tasks, the authors selected a few datasets from the UEA and applied PCA pairs with models that are primarily known as forecasting baselines (except for TimesNet). The reported results, whether with or without PCA, do not represent state-of-the-art performance. It would have been beneficial to include models like InceptionTime or a simple ResNet for comparison.\", \"In the forecasting tasks, the authors focused solely on the ETT datasets, which are recognized for their difficulty in forecasting. It would be more insightful to conduct similar experiments on datasets such as traffic or electricity, which may provide additional context and validation for the proposed methods.\", \"[1] Unsupervised scalable representation learning for multivariate time series, Neurips 2019\", \"[2] Ts2vec: Towards universal representation of time series, AAAI 2022\", \"[3] Vector Quantized Time Series Generation with a Bidirectional Prior Model, AISTATS 2023\", \"[4] Interpretable time series neural representation for classification purposes, IEEE DSAA 2023\"], \"questions\": [\"Q1. 
Could you please include a comparative analysis section in their paper, directly comparing PCA's performance, computational efficiency, and memory usage against the methods you mentioned (T-Loss, TS2Vec, and autoencoder-based approaches). This would provide a clearer context for evaluating PCA's contribution relative to recent advances in the field.\", \"Q2. Could you please include state-of-the-art classification models like InceptionTime and ResNet in their comparison for the classification tasks, providing a stronger baseline for evaluating PCA's impact.\", \"Q3. I believe that it would be valuable to expand the forecasting experiments to include additional widely-used datasets such as traffic and electricity, alongside the ETT datasets. Suggest specific datasets that are commonly used in the field for benchmarking.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
For forecasting tasks, the original series length is 336, which we reduce to 48. For extrinsic regression tasks, the original series lengths are 84 or 266, which we reduce to 16 and 48, respectively.\\n\\nW2&Q2: PCA in classification tasks.\", \"a2\": \"We apologize for the lack of clarity. For the three TSA tasks, PCA is initially fitted on the training data. During inference, the pre-fitted PCA model is applied to the test data without re-fitting, resulting in a significant reduction in inference time, as demonstrated in Figure 4 and Table 9.\\n\\nW3&Q3: Description of the related work.\", \"a3\": \"Thank you for your comments. We have re-read the paper carefully, but we did not find errors in our understanding or description of the paper. First, the experiments in the paper use multiple variables to predict a single target variable. Second, the method in that paper does not process the target variable series; instead, it reduces the dimensionality of the M covariate series to P series, without shortening the length of each series. Third, we merely speculate that other covariate series may have limited utility for predicting the target variable, and many studies [1,2] show that channel independence is more effective for time series forecasting. However, we did not claim that the covariate series are definitely useless for predicting the target series. We would greatly appreciate it if you could provide detailed feedback on any inaccuracies in our descriptions and offer specific reasons.\\n\\n[1] Zeng, Ailing, et al. \\\"Are Transformers Effective for Time Series Forecasting?\\\" arXiv preprint arXiv:2205.13504 (2022). \\n[2] Nie, Yuqi, et al. \\\"A time series is worth 64 words: Long-term forecasting with transformers.\\\" arXiv preprint arXiv:2211.14730 (2022).\", \"q4\": \"Figure 3 and Dimensionality reduction along the time dimension.\", \"a4\": \"We apologize for any confusion. 
We would like to respectfully clarify that in our paper, PCA is applied to the time dimension rather than the feature dimension. In our study, each time step is considered a 'feature' for PCA process. Additionally, Figure 3(a) shows the effect of applying PCA to the series and then inversely transforming them back to the original series. The PCA-inversed series is significantly smoother than the original series, indicating that PCA effectively filters out noise while preserving essential features. Figure 3(b) is intended to demonstrate that the distribution of statistical characteristics of the PCA-inversed series are similar to those of the original series. Neither of these subplots aims to demonstrate the effectiveness of PCA in reducing dimensionality along the feature dimension or the time dimension.\", \"q5\": \"Figure 3 and impact of the number of dimensions on model\\u2019s performance.\", \"a5\": \"Thanks for your comments. Figure 3 is a schematic diagram intended to illustrate the denoising effect of PCA and its ability to retain the statistical information of the original series, rather than to present experimental results. The Impact of the number of dimensions k on model\\u2019s performance is illustrated in Figure 6 of the Supplemental Materials. In the updated version, we have integrated the Supplementary Material with the main text. From Figure 6 we can see that as the number of principal components k increases, the importance of the selected features also increases, but the rate of increase diminishes. However, after k reaching to 48 (the number chosen in our experiment), further increasing k results in minimal change in feature importance.\"}", "{\"comment\": \"Thank you for your response, further suggestions, and increasing our score. 
Our study aims to propose a general preprocessing method for time series, which we think has clear differences from representation learning-based approaches, and thus we did not extensively analyse related work in the field before. Based on your suggestion, we have added a comparison with the typical representation learning-based approaches in terms of performance and training/inference efficiency, which has further improved the quality of our paper. Moreover, our contributions extend beyond proposing a novel, general approach for dimensionality reduction along the time dimension for time series. We also challenge the traditional perception that PCA cannot be applied to data with temporal dependencies. This contribution broadens the applicability of the classic dimensionality reduction method PCA, enabling it to play a more significant role in a wider range of fields.\\n \\nGiven these contributions, we kindly hope you will consider raising our score. We greatly appreciate your constructive feedback and have made every effort to integrate your suggestions to improve the clarity and quality of our work. Thank you for your time and thoughtful review.\"}", "{\"comment\": \"Q6: Figure 3 and more datasets.\", \"a13\": \"Thanks for your suggestions. We conducted tests on more datasets and found that PCA has similar effects in denoising and retaining the statistical information of the original series. Specifically, we have added the experiments on multiple UCR datasets for TSC tasks. The results in Table A show that PCA preprocessing retains the principal information of the series on the UCR datasets, matches the TSC performance of the original series, and enables faster training/inference. And we also applied PCA to the commonly used TSF datasets, Electricity and Traffic. 
The results in Table B show that PCA preprocessing retains series principal information on Electricity and Traffic datasets, matching TSF performance with original series, and enabling training/inference acceleration.\", \"table_a\": \"TSC experiments on UCR datasets. The * symbols after models indicate the application of PCA before inputting the series into the models. Accuracy metric is adopted. Bold font represents the superior result.\\n|Dataset|Linear|Linear*|Informer|Informer*|FEDformer|FEDformer*|\\n|-|-|-|-|-|-|-|\\n|ACSF1|0.400|**0.580**|0.640|**0.780**|0.560|**0.730**|\\n|Adiac|0.684|**0.760**|0.538|**0.716**|0.560|**0.729**|\\n|ChlorineConcentration|0.553|**0.771**|0.564|**0.722**|**0.607**|0.544|\\n|Computers|0.536|**0.600**|0.628|**0.640**|**0.830**|0.648|\\n|Earthquakes|0.597|**0.691**|**0.748**|0.719|0.734|**0.755**|\\n|ElectricDevices|**0.482**|0.479|**0.695**|0.605|**0.645**|0.563|\\n|GunPointAgeSpan|0.864|**0.892**|0.889|**0.930**|0.775|**0.892**|\\n|GunPointMaleVersusFemale|0.731|**0.991**|**0.997**|**0.997**|0.706|**0.991**|\\n|GestureMidAirD1|0.477|**0.500**|0.431|**0.515**|**0.692**|0.500|\\n|GestureMidAirD2|**0.485**|0.454|**0.523**|0.400|0.346|**0.415**|\\n|GestureMidAirD3|**0.323**|0.254|**0.377**|0.277|0.231|**0.292**|\\n|AllGestureWiimoteX|**0.296**|0.283|0.289|**0.403**|**0.460**|0.384|\\n|AllGestureWiimoteY|0.319|**0.324**|**0.516**|0.387|0.409|**0.424**|\\n|AllGestureWiimoteZ|**0.320**|**0.320**|0.296|**0.372**|**0.480**|0.366|\\n|FordA|0.504|**0.507**|0.523|**0.817**|0.639|**0.822**|\\n|FordB|0.532|**0.546**|0.549|**0.709**|0.672|**0.685**|\\n|Better Count|5|12|6|11|6|10|\", \"table_b\": \"TSF experiments on the Electricity and Traffic datasets. 
Bold font represents the superior result.\\nMethod||Linear||Linear*||Informer||Informer*||FEDformer||FEDformer*||\\n|-|-|-|-|-|-|-|-|-|-|-|-|-|-|\\nDataset|Length|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|\\n|Electricity|96|0.213|0.326|**0.212**|**0.325**|**0.307**|**0.391**|0.322|0.413|0.495|0.526|**0.286**|**0.388**|\\n||192|0.241|0.347|**0.240**|**0.344**|0.341|0.420|**0.347**|**0.426**|0.434|0.492|**0.314**|**0.404**|\\n||336|0.275|0.372|**0.273**|**0.369**|0.475|0.515|**0.422**|**0.476**|0.545|0.548|**0.346**|**0.433**|\\n||720|0.312|0.414|**0.306**|**0.409**|0.644|0.611|**0.537**|**0.539**|0.566|0.572|**0.463**|**0.504**|\\n|Traffic|96|**0.138**|**0.229**|0.144|0.237|0.210|0.300|**0.183**|**0.271**|0.265|0.367|**0.186**|**0.285**|\\n||192|**0.141**|**0.231**|0.146|0.238|0.221|0.325|**0.189**|**0.280**|0.270|0.371|**0.191**|**0.288**|\\n||336|**0.142**|**0.236**|0.147|0.244|0.234|0.350|**0.203**|**0.305**|0.288|0.387|**0.219**|**0.311**|\\n||720|**0.156**|**0.251**|0.167|0.265|0.305|0.420|**0.253**|**0.328**|0.305|0.408|**0.230**|**0.336**|\\n|Better Count|||8||8||2||14||0||16|\", \"q7\": \"Explanation of trends and periodic patterns in historical series.\", \"a7\": \"Thanks for your comments. Applying PCA to time series disrupts the original periodicity and trends. However, through experiments, we found that despite this disruption, the model still achieves similar performance as before. We have provided our explanations of this phenomenon: if we assume all historical windows in the training set exhibit an increasing trend, and we simultaneously change them to a decreasing trend, while keeping the trend of the target series unchanged (also assumed to be an increasing trend), this would not affect the model's learning. Essentially, the model would learn that a decreasing trend in historical series can lead to an increasing trend in future series, rather than an increasing trend leading to an increasing trend. 
Similarly, applying the same transformation or scaling to the periodic information in all historical windows in the training set would not greatly impact the model's learning. Through the observations, we want to show that the presence of specific trends/periodicities in historical series is not necessary for the learning process of TSA models. Instead, the presence of consistent and coherent patterns is sufficient for models to provide accurate predictions. Therefore, although PCA may alter the trend or periodicity, it introduces new coherent patterns (equivalent to applying the same transformations to all historical windows)\\u2014such as the main directions of variation, denoised low-dimensional representations, and latent features. These new consistent features in the training set enable the model to learn effectively.\"}", "{\"comment\": \"Dear Reviewer ohhk,\\n\\nWe would appreciate it if you could let us know whether our responses have adequately addressed your concerns. We are happy to address any further questions or concerns you may have. Thank you!\\n\\nBest wishes, \\nPaper6918 Authors\"}", "{\"summary\": \"This manuscript revisits Principal Component Analysis (PCA) to explore its utility in reducing the temporal dimension of time series data, as a novel area of focus, because PCA has traditionally been applied mostly on the variable space. The paper posits that PCA, when applied to sliding series windows, not only maintains model performance but also enhances computational efficiency. Extensive experiments across time series classification, forecasting, and extrinsic regression tasks substantiate these claims. The paper suggests that PCA preprocessing results in reduced training and inference times without compromising on model effectiveness across various deep learning-based time series analysis models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. 
This paper attempts to improve the efficiency of time series analysis tasks, which can be very useful for resource-constrained scenarios including edge computing.\\n\\nS2. The evaluation was conducted on three different tasks, i.e., time series classification, forecasting, and extrinsic regression, demonstrating the versatility of the proposed methods.\\n\\nS3. The paper is easy to parse.\", \"weaknesses\": \"W1. It is still hard to conclude from this paper that PCA before feeding into deep neural networks is a versatile solution that should be suggested for time series analysis tasks under resource constraints. Briefly speaking, neural networks, especially the early layers of neural networks, are considered to perform feature extraction as well as dimension adjustment. This overlaps a bit with the purpose of PCA.\\n\\nW2. There are more dimensionality reduction techniques for time series or high-dimensional vectors than PCA itself. For example, DWT, FFT, etc. Although these have been briefly discussed, it would still be very important to compare PCA with other deep model-agnostic dimension reduction techniques.\", \"questions\": \"Q1. The discussions provided in section 3.2 are not really theoretical analysis. The section title is a bit misleading. I would suggest either breaking this section down into the motivation of the work, or renaming it to something like intuitional justification.\\n\\nQ2. In line 340, why are only 5 datasets from the UEA selected, out of 30+ multivariate datasets? Also, the five datasets are finally processed into univariate datasets, in which case why are the original 100+ univariate datasets excluded?\\n\\nQ3. The accuracy reported in table 2 is pretty low on the first two datasets. And the differences with or without PCA are huge in some cases, e.g., FEDformer and TimesNet on SelfRegulationSCP1. Could the authors provide justification for these numbers? Otherwise, this would damage the versatility of the proposed approach.\\n\\nQ4. 
What is the backbone model for the results in Table 5?\\n\\nQ5. I am not sure if I fully understood line 302-304, \\u201cFor example, if all positive trends\\u2026\\u201d Could the authors further explain a bit on this for me? Thanks!\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Please provide a response to the authors of submission 6918\", \"comment\": \"Dear Reviewer ohhk,\\n\\nThe above comment to Reviewer KQnv also applies to you, as the authors have provided an extensive response to your review, including additional experimental results.\\n\\nThe discussion period is almost over, so please read the responses the authors of submission 6918 have provided to your review.\\nPlease specify which of your concerns were addressed and explain your decision to update or not update your score.\\n\\nAll the best,\\n\\nThe AC\"}", "{\"summary\": \"The paper investigates using Principal Component Analysis (PCA) to reduce the temporal dimensionality of time series data in deep learning models for tasks like classification, forecasting, and regression. Traditionally, PCA has been applied to reduce variable dimensions, but this study applies PCA across time windows, aiming to maintain temporal structure while reducing redundancy and computational costs. Results show that PCA preprocessing can accelerate training and inference by up to 40% in some models, like Informer, and reduce memory usage by 30% in models like TimesNet without compromising accuracy. 
PCA also proves effective in noise reduction, retaining essential statistical characteristics and supporting efficient learning across different deep learning models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It clearly introduces the problem of temporal dimensionality reduction in time series data and provides a solid rationale for using PCA.\", \"The experimental setup is thorough, covering various time series tasks (classification, forecasting, and regression) and a range of model types (Linear, Transformer, CNN, RNN), which effectively illustrates the generalizability of the approach.\", \"The paper strengthens its argument by presenting concrete metrics, such as GPU memory reduction and speed improvements.\"], \"weaknesses\": [\"The intuition behind why PCA is specifically suitable for time series dimensionality reduction is not clearly explained. Many studies have shown that using orthogonal bases (e.g., FFT, wavelets, Legendre polynomials) can improve performance and reduce dimensionality, yet the paper does not address how PCA differs or why these methods were not included in comparisons.\", \"Adding comparisons with modern compression techniques, beyond linear methods and downsampling, could make the evaluation more robust.\", \"Some sections, particularly on the theoretical underpinnings of PCA\\u2019s use for time series, could benefit from clearer explanations to aid reader comprehension.\", \"Each table could benefit from explanations of the metrics used, clarifying what constitutes a \\u201cgood\\u201d or \\u201cbad\\u201d result (e.g., lower MSE is better), which would help readers interpret the results more easily.\", \"More detailed visualizations, such as diagrams showing PCA\\u2019s effects on time series structure and feature retention, could enhance clarity.\"], \"questions\": \"What makes PCA particularly effective here? 
Is there something unique about the space spanned by its vectors?\\n\\nHow does the dimensionality affect the results? A graph showing MSE versus the number of dimensions (n) would be helpful.\\n\\nWhen selecting the first n eigenvalues, do you choose the largest, the smallest, or select them randomly?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"W2&Q3: TSF experiments on the Electricity and Traffic datasets.\", \"a3\": \"Thanks for your suggestions. As your suggestion, we applied PCA to the commonly used TSF datasets, Electricity and Traffic. The results in Table D show that PCA preprocessing retains series principal information on Electricity and Traffic datasets, matching TSF performance with original series, and enabling training/inference acceleration.\", \"table_d\": \"TSF experiments on the Electricity and Traffic datasets. The * symbols after models indicate the application of PCA before inputting the series into the models. 
Bold font represents the superior result.\\n Method||Linear||Linear* ||Informer||Informer* ||FEDformer||FEDformer* ||\\n|-|-|-|-|-|-|-|-|-|-|-|-|-|-|\\n|Dataset|Length|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|\\n|Electricity|96| 0.213 | 0.326 | **0.212** | **0.325** | **0.307** | **0.391** | 0.322 | 0.413 | 0.495 | 0.526 | **0.286** | **0.388** |\\n||192| 0.241 | 0.347 | **0.240** | **0.344** | 0.341 | 0.420 | **0.347** | **0.426** | 0.434 | 0.492 | **0.314** | **0.404** |\\n||336| 0.275 | 0.372 | **0.273** | **0.369** | 0.475 | 0.515 | **0.422** | **0.476** | 0.545 | 0.548 | **0.346** | **0.433** |\\n||720| 0.312 | 0.414 | **0.306** | **0.409** | 0.644 | 0.611 | **0.537** | **0.539** | 0.566 | 0.572 | **0.463** | **0.504** |\\n|Traffic|96| **0.138** | **0.229** | 0.144 | 0.237 | 0.210 | 0.300 | **0.183** | **0.271** | 0.265 | 0.367 | **0.186** | **0.285** |\\n||192| **0.141** | **0.231** | 0.146 | 0.238 | 0.221 | 0.325 | **0.189** | **0.280** | 0.270 | 0.371 | **0.191** | **0.288** |\\n||336| **0.142** | **0.236** | 0.147 | 0.244 | 0.234 | 0.350 | **0.203** | **0.305** | 0.288 | 0.387 | **0.219** | **0.311** |\\n||720| **0.156** | **0.251** | 0.167 | 0.265 | 0.305 | 0.420 | **0.253** | **0.328** | 0.305 | 0.408 | **0.230** | **0.336** |\\n|Better Count| | | 8 | | 8 | | 2 | | 14 | | 0 | | 16 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors propose an innovative approach for preprocessing time series data by applying Principal Component Analysis (PCA) to data within sliding windows. This method aims to extract the principal components from the data, effectively reducing dimensionality before feeding it into a deep learning network. Traditionally, it is commonly believed that applying PCA along the time dimension can disrupt temporal dependencies inherent in time series data. 
Contradicting this notion, the authors suggest that applying PCA to sliding sequence windows can preserve model performance while enhancing computational efficiency. They support their claims with experiments conducted on three primary tasks: time series classification, prediction, and regression.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of applying PCA within sliding windows offers a fresh perspective on dimensionality reduction for time series data. By reducing the dimensionality of the input data, the proposed method can decrease computational load, which is particularly beneficial for deep learning models dealing with large-scale or high-frequency time series data.\", \"weaknesses\": \"The study exhibits several weaknesses. Firstly, it lacks clarity on dimensionality reduction along the time dimension, focusing instead on feature dimension reduction without reducing time steps, which contradicts its stated goal. Secondly, the application of PCA to the test data in classification tasks is ambiguous; applying PCA to test data is inappropriate, but without it, the claimed acceleration in inference time lacks justification. Thirdly, the study misrepresents related work by critiquing standard practices designed to prevent information leakage, and its theoretical analysis fails to support the core claim of time dimension reduction. Additionally, the experiments lack statistical validation and parameter exploration, relying on single runs with fixed principal component numbers, raising concerns about generalizability and potential overfitting. The authors make overgeneralized and absolute claims about the benefits of PCA without sufficient evidence, ignoring observed performance degradation in certain datasets. Furthermore, limited dataset diversity suggests results may be dataset-specific, and discrepancies in reported data raise doubts about reliability. 
Lastly, the study challenges established concepts in time series analysis without adequate empirical support, and methodological inconsistencies, such as varying the number of principal components without clear rationale, hinder reproducibility and limit the applicability of the findings.\", \"questions\": \"1. The paper does not seem to demonstrate whether the number of time steps is reduced or to what extent. It only generally mentions at the beginning that applying PCA will compress the time series, but it is unclear whether the compression target is the time steps or the data component features within the sliding window. In the Introduction, the authors state that dimensionality reduction techniques for time series data mainly focus on the variable dimension, and they intend to apply PCA for dimensionality reduction along the time dimension. However, from the overall description, the authors appear to only apply PCA to the time-step data within the sliding window to extract local features. This method extracts feature information from each window, but the number of time steps within the window seems to remain unchanged.\\n\\n2. It is currently unclear whether the authors also applied PCA to the test set in the classification task. If the authors used PCA to preprocess the test set, this would be unreasonable because the test data should be assumed to be unknown beforehand. If the authors did not apply PCA to the test set, maintaining the original data format and attributes while keeping the network unchanged, then theoretically, there should not be a significant acceleration in inference time. \\n\\n3. There is an issue with unreasonable descriptions in the related work section. 
The authors discuss the limitations of Xu et al.'s work titled \\\"Transformer multivariate forecasting: Less is more?\\\" In their second point, they state: \\\"Secondly, it is designed for scenarios where a multivariate series forecasts a univariate series, focusing on reducing the variable dimension of covariate series without preprocessing the target variable series, even if the covariate series may have minimal association with the target series.\\\" \\n\\n4. In the theoretical analysis section, as shown in Figure 3, the authors only demonstrate the effectiveness of PCA in reducing dimensionality along the feature dimension. However, they do not address dimensionality reduction along the time dimension (i.e., the compression of the number of time steps). \\n\\n5. It appears that the experimental results presented in Figure 3 are based on a single experiment. The authors did not verify the generalizability of their results by experimenting with different numbers of principal components, k. Without varying k, there's a risk that the chosen value might be a \\\"lucky number\\\" that coincidentally yields favorable results. \\n\\n6. The experimental results may be influenced by the characteristics of the specific dataset used. The smoothing effect observed in Figure 3 might only be applicable to the current dataset and may not represent the performance on other time series data. Including experiments on a variety of datasets could improve the credibility of the conclusions. \\n\\n7. The authors propose that \\\"Specific trends and periodic patterns in historical series may not be crucial for the learning of TSA models.\\\" In the field of Time Series Analysis (TSA), traditional viewpoints and a large body of research emphasize the importance of trends and periodic patterns. These elements are critical for understanding the inherent structure of the data and for predicting future values. \\n\\n8. 
The authors' statement, \\\"Therefore, although PCA may alter the trend or periodicity, it introduces new coherent patterns\\u2014such as the main directions of variation, denoised low-dimensional representations, and latent features\\u2014that benefit TSA model learning without negatively impacting predictive performance,\\\" is overly absolute. Claiming that there are no negative impacts is too definitive. In practical applications, any data transformation can potentially have both positive and negative effects on model performance; the specific outcome depends on the characteristics of the data and the type of model employed.\\n\\n9. It seems that the experiments presented in Tables 2, 3, and 4 are based on single runs without any statistical significance testing. There is no indication of whether the results are consistent across multiple trials or if they could be due to random chance. Furthermore, in each experiment, the number of principal components (k) selected for PCA is based on a single value, and this value differs across different datasets.\\n\\n10. In Table 2, concerning the Time Series Classification (TSC) experiments, the authors conclude based on the results: \\\"These results reveal PCA\\u2019s efficacy in extracting series information for TSC tasks without performance loss, enabling faster training/inference.\\\" However, the results presented in Table 2 indicate that applying PCA on certain datasets and networks can lead to significant performance degradation. For example, on the SelfRegulationSCP1 dataset, the accuracy of the TimesNet network decreased by 23.2% after applying PCA. This substantial drop contradicts the authors' absolute assertion of \\\"without performance loss.\\\" Out of the 20 metrics reported, only 10 show performance improvement when PCA is applied, which amounts to just 50%. 
This proportion raises doubts about the claim made in the abstract that applying PCA to sliding sequence windows can maintain model performance.\\n\\n11. The authors state in Table 3: \\\"The results of Linear are adapted from the study (Zeng et al., 2023).\\\" However, upon reviewing the cited paper by Zeng et al. (2023), I was unable to locate the specific data presented by the authors. This discrepancy raises concerns about the reliability and accuracy of the data used in their experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We deeply appreciate the insightful review and will do our utmost to address the questions and weaknesses.\\n\\nW1&W3: Theoretical underpinnings of PCA. \\nThank you for your comments. When we initially discovered that PCA could effectively reduce the temporal dimensionality of time series data, we found it intriguing. We searched for theoretical support in existing literature but did not find any relevant references, possibly due to the counterintuitive nature of our findings. Therefore, we explained this phenomenon through a combination of PCA mechanism analysis and visualization techniques. Specifically, PCA can effectively denoise and retain the primary statistical characteristics of the data. This is supported by the principles of PCA, its applications in other domains, and the visualizations we provide. Regarding why PCA can maintain model performance despite disrupting trends and periodicities, we speculate that the presence of specific trends or periodicities in historical series is not necessarily essential for the learning process of TSA models. Instead, the presence of consistent and coherent patterns is sufficient for models to provide accurate predictions. 
We offer the following detailed explanation: If we assume that all historical windows in the training set exhibit an increasing trend, and we simultaneously change them to a decreasing trend while keeping the trend of the target series unchanged (also assumed to be an increasing trend), this would not significantly affect the model's learning. Essentially, the model would learn that a decreasing trend in historical series can lead to an increasing trend in future series, rather than an increasing trend leading to an increasing trend. Similarly, applying the same transformation or scaling to the periodic information in all historical windows in the training set would not significantly impact the model's learning. Therefore, although PCA may alter the trend or periodicity, it introduces new coherent patterns\\u2014such as the main directions of variation, denoised low-dimensional representations, and latent features. These new consistent features in the training set enable the model to learn effectively.\\n\\nW1&W2: Comparison with other frequency-based dimensionality reduction techniques.\", \"a2\": \"Thank you for your useful suggestions; we have compared PCA with FFT and DWT as you suggested. In the experiments, the original series is first transformed from the time domain to the frequency domain using either FFT or DWT. The top k frequency components (where k is 48, the same as the number of principal components) are then selected and input into the TSA models. The results are shown in Table A. It is evident that the top k frequency components obtained using FFT or DWT fail to accurately capture the key information in the original series and effectively compress the series, leading to a significant decrease in model performance. We have also included these contents in Section G of the Supplemental Materials.\", \"table_a\": \"Comparison of PCA with FFT and DWT as series reduction methods. Lower MSE/MAE indicates better performance. 
Bold font represents the superior result.\\n|Method|||Linear||PCA||FFT||DWT| \\n|-|-|-|-|-|-|-|-|-|-|\\n|Dataset|Length|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|\\n|ETTm1|96|**0.028**|**0.125**|0.029|0.126|2.110|1.328|1.827|1.299|\\n||192|0.043|0.154|**0.042**|**0.151**|2.086|1.318|1.943|1.344|\\n||336|0.059|0.180|**0.056**|**0.176**|2.205|1.356|1.767|1.279|\\n||720|**0.080**|**0.211**|0.081|0.212|2.232|1.348|1.981|1.358|\\n|ETTm2|96|0.066|0.189|**0.065**|**0.188**|3.417|1.467|1.330|1.010|\\n||192|0.094|0.230|**0.092**|**0.228**|3.883|1.566|1.460|1.068|\\n||336|**0.120**|**0.263**|0.123|0.267|3.273|1.442|1.421|1.049|\\n||720|0.175|**0.320**|**0.174**|**0.320**|3.371|1.465|1.572|1.111|\\n|Better Count|||7||10||0||0|\", \"w4\": \"Explanations of the metrics.\", \"a4\": \"Thank you for your valuable feedback. We appreciate your suggestion to enhance the clarity of our tables by providing explanations of the metrics used and clarifying what constitutes a \\\"good\\\" or \\\"bad\\\" result. For TSF and TSER tasks, the metrics MSE, MAE, and RMSE are all better when lower. For TSC tasks, the metric Accuracy is better when higher. In the revised paper, we have indicated this in the captions of tables.\", \"w5\": \"Detailed visualizations.\", \"a5\": \"Thanks for your comments. The detailed effects of PCA are illustrated in Figure 7 of the Supplemental Materials. We previously misunderstood and thought that the Supplementary Material needed to be submitted separately from the main text, so we placed it in a zip file. In the updated version, we have integrated the Supplementary Material with the main text. From Figure 7 we can see that PCA series include the primary information of the original series with a small subset of initial values (principal components), while the remaining values exhibit minimal fluctuations.\"}", "{\"comment\": \"Q1: PCA\\u2019s effectiveness.\", \"a6\": \"Thank you for your valuable review. 
PCA transforms the original series into a different space by projecting them onto a new set of axes defined by the principal components. It retains only the most significant principal components while discarding the less important ones, which serves as a noise filtering mechanism. Additionally, since PCA only performs a spatial transformation, many of the statistical characteristics of the original series, such as mean, peak values, and higher-order moments, are preserved. This ensures that the transformed data retains key properties of the original data, which can be crucial for time series analysis. Moreover, while PCA can filter noise and retain statistical information, it also disrupts the periodicity and trends of the original series. However, we discuss from another perspective that periodicity and trends are not necessarily essential for time series analysis. Instead, the presence of consistent and coherent patterns is sufficient for models to provide accurate predictions.\", \"q2\": \"Impact of the number of dimensions on model\\u2019s performance.\", \"a7\": \"Thanks for your comments. The Impact of the number of dimensions on model\\u2019s performance is illustrated in Figure 6 of the Supplemental Materials. In the updated version, we have integrated the Supplementary Material with the main text. From Figure 6 we can see that as the number of principal components increases, the importance of the selected features also increases, but the rate of increase diminishes. However, after the number of principal components reaching to 48 (the number chosen in our experiment), further increasing the number of principal components results in minimal change in feature importance.\", \"q3\": \"Eigenvectors.\", \"a8\": \"When selecting eigenvectors, we choose the largest n eigenvectors because they correspond to the directions of maximum variance in the data. 
By retaining these eigenvectors, we ensure that the transformed data captures the most significant features and information, thereby effectively reducing the dimensionality while minimizing information loss.\"}", "{\"metareview\": \"The paper studies the application of PCA to data consisting of sliding windows from a time series, followed by the application of time series models to the dimensionality reduced data.\\n\\nThe reviewers appreciated that the approach is pertinent, that several tasks were considered and that the memory and computation time were reduced. However, serious concerns were raised about the claims of a theoretical analysis, the lack of comparison against other dimensionality reduction techniques for time series, the limited number of datasets considered and missing statistical significance.\\n\\nWhile the authors have addressed some of the issues with additional experimental results on more datasets and comparisons with other methods, this was not sufficient to persuade the reviewers that the work is ready for publication, due to its insufficient positioning with respect to related work.\\n\\nAs I agree with the reviewers\\u2019 assessment, I do not recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"In the discussion, although the authors provided additional results, they ultimately failed to convincingly position their paper in the context of existing work. In the end, all 4 reviewers have opted to reject the paper, though only 2 of them participated in the discussion; the other 2 did not read the response even after being prompted to do so.\"}" ] }
0CtIt485ew
Brain-inspired continual pre-trained learner via silent synaptic consolidation
[ "Xuming Ran", "Juntao Yao", "Yusong Wang", "Mingkun Xu", "Dianbo Liu" ]
Pre-trained models have demonstrated impressive generalization capabilities, yet they remain vulnerable to catastrophic forgetting when incrementally trained on new tasks. Existing architecture-based strategies encounter two primary challenges: Firstly, integrating a pre-trained network with a trainable sub-network complicates the delicate balance between learning plasticity and memory stability across evolving tasks during learning. Secondly, the absence of robust interconnections between pre-trained networks and various sub-networks limits the effective retrieval of pertinent information during inference. In this study, we introduce the $\textit{Artsy framework}$, inspired by the activation mechanisms of silent synapses via spike-timing-dependent plasticity observed in mature biological neural networks, to enhance the continual learning capabilities of pre-trained models. The Artsy framework integrates two key components: 1) During training, the framework mimics mature brain dynamics by maintaining memory stability for previously learned knowledge within the pre-trained network while simultaneously promoting learning plasticity in task-specific sub-networks. 2) During inference, artificial silent and functional synapses are utilized to establish precise connections between the pre-synaptic neurons in the pre-trained network and the post-synaptic neurons in the sub-networks, facilitated through synaptic consolidation, thereby enabling effective extraction of relevant information from test samples. Comprehensive experimental evaluations reveal that our model significantly outperforms conventional methods on class-incremental learning tasks, while also providing enhanced biological interpretability for architecture-based approaches. Moreover, we propose that the Artsy framework offers a promising avenue for simulating biological synaptic mechanisms, potentially advancing our understanding of neural plasticity in both artificial and biological systems.
[ "Continual learning; Silent synapse; Pre-trained model; neuroscience-inspired method" ]
https://openreview.net/pdf?id=0CtIt485ew
https://openreview.net/forum?id=0CtIt485ew
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ufAJREgFuC", "dNk03ucBaT", "MdKSvjSSRl", "D3MsLNZojv", "B0OY4SBE7B" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730131587081, 1730658882404, 1730483235514, 1730428711807, 1732471769316 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6324/Reviewer_dTcH" ], [ "ICLR.cc/2025/Conference/Submission6324/Reviewer_Xigk" ], [ "ICLR.cc/2025/Conference/Submission6324/Reviewer_k1G8" ], [ "ICLR.cc/2025/Conference/Submission6324/Reviewer_vQei" ], [ "ICLR.cc/2025/Conference/Submission6324/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduced the Artsy framework, designed to enhance the continual learning capabilities of pre-trained models, addressing their vulnerability to catastrophic forgetting when incrementally trained on new tasks. Using their framework, the authors are able to achieve state-of-the-art performances on class-incremental learning tasks. Furthermore, this framework offered a promising avenue for simulating biological synaptic mechanisms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The author's approach to constructing a pre-trained model inspired by the activation mechanisms of silent synapses is commendable.\\n2. The overall readability of the article is strong and easy to follow.\\n3. The use of artificial silent and functional synapses establishes precise connections between networks, enhancing the extraction of relevant information during inference.\", \"weaknesses\": \"1. The overall experiments have some shortcomings, as only two common datasets, CIFAR-100 and TinyImageNet, were used.\\n2. Although the authors emphasize biological synaptic mechanisms throughout the paper, corresponding results are not observed in the results section.\\n3. 
The authors mention that pre-trained artificial neural networks lack generalization capabilities, but they do not conduct corresponding experiments to address this issue.\\n4. We would have appreciated a more detailed exploration of how the authors intend to enhance the model based on synaptic mechanisms, accompanied by a mathematical description of these processes. Regrettably, the current explanation remains overly simplistic.\", \"questions\": \"1. Although the methods section of this article is described very clearly, many details are not introduced. For example, the pre-trained network E0(\\u22c5).\\n2. The author needs to explain the mathematical mechanisms underlying artificial synapses.\\n3. The experiments are too weak, relying solely on two commonly used datasets (CIFAR-100 and TinyImageNet).\\n4. The author should include additional experiments that provide interpretability to highlight the advantages of the biological mechanisms.\\n5. The author should also provide some efficiency metrics to demonstrate the superiority of the model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces the Artsy framework, which enhances continual learning in pre-trained models by mimicking the activation mechanisms of silent synapses via spike-timing-dependent plasticity observed in biological neural networks. The framework maintains memory stability for previously learned knowledge in the pre-trained network while promoting learning plasticity in task-specific sub-networks during training. During inference, it uses artificial silent and functional synapses to connect pre-synaptic neurons in the pre-trained network with post-synaptic neurons in the sub-networks, enabling effective information extraction. 
Experimental results show that Artsy outperforms conventional methods on class-incremental learning tasks and offers better biological interpretability than other solutions to mitigate catastrophic forgetting.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents an interesting and original idea by leveraging biological mechanisms of learning to enhance AI models, specifically focusing on the activation mechanisms of silent synapses through spike-timing-dependent plasticity. Given that the biological brain exhibits minimal effects of catastrophic forgetting compared to AI models, seeking inspiration from neurobiological learning mechanisms is a promising and innovative research direction. While initial explorations exist in the literature, there remains substantial room for further innovative research in this area.\\n\\n2. Although some additional details are necessary for a complete explanation and reproducibility of the experiments, the authors have made a commendable effort in describing the framework by providing both training and inference algorithms and biological motivation.\\n\\n3. The results, while needing a few more details for complete clarity, appear promising and suggest that the Artsy framework outperforms conventional methods on class-incremental learning tasks.\", \"weaknesses\": \"\\u2022\\tThere are other literature works that propose biologically inspired solutions to mitigate catastrophic forgetting (see below), which are not covered in the background nor related work sections. I suggest adding these references to provide a more comprehensive context for the proposed framework in the neuroscientific context.\\n\\n\\u2022\\tSome parameters needed for reproducibility of the results (incl. number of parameters, type of connectivity e.g. for silent and functional synapses) are not reported. 
For instance, the paper does not mention how many artificial synapses are used for each subnetwork or whether there is e.g. all-to-all vs sparse connectivity.\\n\\n\\u2022\\tThe study mentions limitations of other algorithms in the background section regarding efficiency and computational time. However, the authors do not discuss these features of the Artsy framework compared to other algorithms. Efficiency, model complexity and computational time are important aspects that the authors should quantitatively analyze (or at least provide estimates for) to explain the performance vs efficiency tradeoff.\\n\\n\\u2022\\tA potential limitation of the framework is the potential increase in the number of subnetworks and artificial synapses with the addition of more classes. This could pose scalability issues and raise questions about the biological plausibility of the framework. I suggest that the authors provide more information and comments on this aspect.\\n\\n\\u2022\\tThe paper does not provide a link to the code, which is essential for reproducibility and further validation of the results.\\n\\n\\u2022\\tStandard deviations for the results in Table 1 and Figure 3 and 4 are not shown. These are important for comparing the variability of the frameworks. In addition, the experimental setup lacks clarity regarding the number of runs averaged (for instance, for Figure 4B).\\n\\n\\u2022\\t\\u201cGood\\u201d and \\u201cbad features\\u201d (sec 4.5) are not clearly defined\\n\\n\\u2022\\tThe paper makes a claim that brain lesions causing synaptic disconnections can lead to dementia by disrupting synaptic consolidation but lacks references to support this. 
Moreover, the connection between artificial synapses of the Artsy framework and brain lesions needs to be made clearer.\\n\\n\\u2022\\tMore targeted explanations of AMPA and NMDA receptors are needed, for example if the relevance of these receptors to short- vs long-term plasticity is related to the dynamics of functional and silent synapses \\n\\n\\u2022\\tThe diagram for Figure 2C could show more than one subnetwork to accurately represent the architecture.\\n\\nIf the above points are clarified, I am happy to revise my score.\\n\\nSuggested references (non exhaustive):\", \"https\": \"//arxiv.org/pdf/2405.09637 , https://www.nature.com/articles/s42256-023-00747-w , https://www.nature.com/articles/s41467-022-29491-2 , https://arxiv.org/pdf/2403.13249 , https://proceedings.mlr.press/v232/madireddy23a.html , https://pubmed.ncbi.nlm.nih.gov/37589728/ , https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf\", \"questions\": \"Minor:\\n1.\\tThe type function used for S_t is not specified. I suppose it is a step function, if so could you please confirm and specify?\\n\\n2.\\tWhere does the name \\\"Artsy\\\" originate from? Is it an abbreviation for \\\"ARTificial Synapse\\\"? If so, could you specify this in the paper?\\n\\n3.\\tCan the Artsy framework work in other continual learning settings beyond class-incremental learning (CIL)?\\n\\n4.\\tHow are both the pre-trained network and the initialized subnetworks analogous to the mature brain network? 
Could you provide examples and references of networks and subnetworks coexisting and connected in the mature brain, but with different dynamics?\\n\\n5.\\tWhat is the rationale behind naming the connections \\\"artificial synapses\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the problem of catastrophic forgetting of pre-trained models by presenting an architecture for class-incremental learning.\\nThe architecture, the Artsy framework, is inspired by the plasticity of neurons in the brain. \\nArtsy simulates silent and functional synapses. Specifically, (1) the fixed pre-trained network acts as a consolidated memory, (2) the sub-network learns the features of new incrementally available data, (3) artificial synapses interconnect the pre-trained network and sub-networks. \\nHere, artificial synapses represent the silent and functional synapses.\\nThe experimental results show that Artsy achieves superior performance on incremental learning tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Originality:** An innovative biologically inspired approach to avoid catastrophic forgetting while using pre-trained models for incremental class learning is presented. The emulation of silent and functional synapses to artificial networks is appraising and novel in the context of pre-trained models.\\n\\n**Quality:** A solid background on the biological foundations necessary to understand the composition of the Artsy framework is given.\\n\\n**Clarity:** The flow of the text is easy to follow. The objective of the paper is clearly stated. All necessary background is given to understand the presented approach. 
The experimental setup is explained in detail.\\n\\n**Significance:** The presented work is significant to advance the research in the domain of continuous/lifelong machine learning. The biological inspiration highlights this.\", \"weaknesses\": \"**Inconsistencies in formulas** There are some inconsistencies between equations and the presented algorithms. For example, LINE 230, Eq. (1) defines $h_0 = E_0(x)$, while LINE 272, Algorithm 1, uses $F(x)$ and LINE 326, Algorithm 2, uses $E_0(x)$.\\nLINE 323 states $E_0(x) + \\\\sum_{i=1}^{t}E^i(x) * m_i$ which is different from the expression within the parentheses in Eq. (5). Equation (3) and (4) are not explained enough. For example, what is the purpose of $m_t$ in general (apart from determining whether a synapse is silent or functional) and how $c_t$ is learned?\\n\\n**Weak ablation study**\\nThe ablation study uses two different types of features as input to test the performance. For the ablation study per se, for example, it would be meaningful to see the separate contribution of $E_0(x)$ and $\\\\sum_{i=1}^{t}E^i(x) * m_i$ to the performance on the class incremental learning task.\\n\\n**Limited related work** While related work on silent and functional synapses and other approaches for class incremental learning is thoroughly presented, the related work on similar biologically inspired architectures is missing. Here, the comparison with other biologically inspired approaches would be beneficial. As an example, [1] can be considered.\\n\\n[1] German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Netw., 113(C):54\\u201371, May 2019.\", \"questions\": \"1. How $thr_t$ is selected? Is it learnable?\\n2. What is $h$ in Eq. (5)? Is it $h_0$?\\n3. How is $S_t$ optimized (LINE 283)?\\n4. How often $m_t=0$ or $m_t=1$? \\n5. What does `complete the prototypes for former classes' mean?\\n6. 
How was it determined that the sub-network is trained for 20 epochs and the artificial synapse is trained for 2 epochs?\\n7. What is a *good feature*? What is a *bad feature*?\\n8. It would be interesting to see the performance on class incremental learning when only the pre-trained model is used. Are such experiments available?\\n9. Will the code be publicly available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors proposed a continual learning system inspired by dynamical switching between silent and functional synapses in the brain. The actual mechanisms, however, has no real link to biological synapses that is extensively discussed in their study. Instead, their algorithm is a variation of the earlier algorithm referred to as \\u2018EASE\\u2019 in this study. EASE uses a pretrained encoder (visual transformer) as a backbone and trains adaptors to learn down-stream tasks. As each adaptor learns a new distinct task, catastrophic forgetting can be avoided.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors propose an automatic gating process that can turn on or off individual adaptors depending on present inputs. For each adaptor, a distinct MLP is trained to predict if a present input is an in-distribution example. During inference, adaptors are activated only when the corresponding MLPs predict \\u201cmatch\\u201d. Consequently, if all MLPs are perfectly trained and are 100% accurate, only a single adaptor trained for a present input will be activated, and other adapters will be shut down, which means that we can expect a highly accurate prediction.\", \"weaknesses\": \"This proposed gating process is interesting, but the authors\\u2019 own comparison to EASE show that its advantage is marginal. 
As they used two simple tasks (CIFAR100 and TinyImageNet) to evaluate the newly proposed algorithm, it remains unclear whether the proposed gating mechanism is beneficial for more complex tasks.\\n\\nAs MLPs need to be trained with old and new data, the algorithm proposed in this study requires a type of replay memory, which is not clarified in the paper.\\n\\nThe authors\\u2019 description of the model (e.g., encoder (E_t(x)), prototypes and MLP) also needs improvements. Their study is based on Zhou et al (2024) study that proposes EASE, so the details of their study may overlap with Zhou\\u2019s study, but this does not mean that they do not need to explain their algorithm. They should extend and improve the description of their proposed algorithm for better readability.\", \"questions\": \"Please see comments for \\\"weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0CieWy9ONY
Neural Eulerian Scene Flow Fields
[ "Kyle Vedder", "Neehar Peri", "Ishan Khatri", "Siyi Li", "Eric Eaton", "Mehmet Kemal Kocamaz", "Yue Wang", "Zhiding Yu", "Deva Ramanan", "Joachim Pehserl" ]
We reframe scene flow as the task of estimating a continuous space-time ordinary differential equation (ODE) that describes motion for an entire observation sequence, represented with a neural prior. Our method, EulerFlow, optimizes this neural prior estimate against several multi-observation reconstruction objectives, enabling high quality scene flow estimation via self-supervision on real-world data. EulerFlow works out-of-the-box without tuning across multiple domains, including large-scale autonomous driving scenes and dynamic tabletop settings. Remarkably, EulerFlow produces high quality flow estimates on small, fast moving objects like birds and tennis balls, and exhibits emergent 3D point tracking behavior by solving its estimated ODE over long-time horizons. On the Argoverse 2 2024 Scene Flow Challenge, EulerFlow outperforms all prior art, surpassing the next-best unsupervised method by more than 2.5 times, and even exceeding the next-best supervised method by over 10%. See https://vedder.io/eulerflow for interactive visuals.
[ "Scene Flow", "Neural Prior", "Ordinary Differential Equation", "Reconstruction" ]
Accept (Poster)
https://openreview.net/pdf?id=0CieWy9ONY
https://openreview.net/forum?id=0CieWy9ONY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oG0Tktla1A", "mB2wSa1VZh", "ljQrwUr0Gs", "cMX04DWzYY", "aU3YGPskTo", "Y3nYaGYTLp", "V2VMcM0B8P", "S4VrK1CSMK", "RAKyYqIj8F", "MqmPBX1iiA", "JUPUDfrny3", "D1P6a18mgV", "6EBvLLuu4d", "04S8xp6d9l" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1731905980376, 1730795887352, 1730688801548, 1732741937902, 1731905646227, 1731905795882, 1731905755628, 1732204288713, 1734650658768, 1730729530936, 1732660097201, 1729899279036, 1737523566010, 1731905697312 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3266/Authors" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_V3mD" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_DftH" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_DftH" ], [ "ICLR.cc/2025/Conference/Submission3266/Authors" ], [ "ICLR.cc/2025/Conference/Submission3266/Authors" ], [ "ICLR.cc/2025/Conference/Submission3266/Authors" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_XQYd" ], [ "ICLR.cc/2025/Conference/Submission3266/Area_Chair_eA7c" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_pt2w" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_V3mD" ], [ "ICLR.cc/2025/Conference/Submission3266/Reviewer_XQYd" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3266/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your review. We think your questions have surfaced several valuable points, and we have clarified and improved our presentation and discussion accordingly.\\n\\n**Sharpening the differential equation formalization**\\n\\nTo sharpen our formalization, we have added a full derivation of the differential equation in Supplemental D. 
In D.1 we perform a derivation from the Laplacian definition of the field ($L$) to our Eulerian formulation of the field ($F$), ultimately presenting $L$ in terms of a differential equation using $F$. In D.2 we then formalize the ability to employ arbitrary start and end times with the diff eq, in D.3 we derive Euler integration over this diff eq to approximate it, and in D.4 we describe replacing $F$ by our neural approximation ($\\\\Theta$). Using this derivation, we present a formal definition for $Euler_\\\\Theta$ in the main text (Section 3, Equation 2).\\n\\nAs part of a broader cleanup, we have updated the title and sharpened Section 3 and 4 to focus more on the theoretical construction and benefits of scene flow as a differential equation.\\n\\n**Implementation details**\\n\\nWe have made EulerFlow\\u2019s implementation details its own section in Supplemental C in order to separate them from the presentation in Section 4. We have added detailed commentary on the implementation details of our various primitives, e.g. KDTree precomputation for some of the ChamferDistance KNN calls, including citations for the used libraries and commentary on relative runtime impacts, and sharing of Euler integration rollouts between steps in the main loss (Equation 3). As part of a push for reproducibility, we will release the code for EulerFlow upon publication.\\n\\nYou are correct that the input of direction into $\\\\Theta$ is a legacy design decision. We removed this discussion in the main text, as we do not believe it provides meaningful value to the method, but we describe it in Appendix C as a matter of transparency.\\n\\n**Different diff eq solver configurations**\\n\\nWe think this is a great direction for future work. 
Unfortunately, a smaller step size during optimization proportionally increases computation \\u2014 $\\\\Delta t / 2$ requires twice as many steps as $\\\\Delta t$ to express the same trajectory, resulting in greater runtime and greater VRAM usage for gradients.\\n\\nHowever, despite being optimized for $\\\\Delta t$ steps, we can after-the-fact query an optimized EulerFlow representation using any arbitrary solver, including $\\\\Delta t / 2, \\\\Delta t / 4, \\\\Delta t / 8$ Euler integration. We have updated our project page (http://eulerflow.github.io/) to visualize these solver trajectories on our interactive scenes. Qualitatively, these trajectories can sometimes be a bit better, but are just as often egregiously bad; this makes sense given the representation was not optimized to perform well with Euler integration under these settings.\"}", "{\"summary\": \"This paper proposes to represent scene flow as a velocity field using a neural prior. Instead of prior art that directly represents per-pair scene flow as neural prior, the authors alternatively propose to use neural prior to model the partial differential equation (PDE) of the position of the point versus the time interval. This novel velocity representation is interesting and could offer flexibility in dealing with long-term sequences of flow estimations as the authors described in the paper. 
The authors have also done extensive analysis of the proposed method on Argoverse 2 (and Waymo) datasets, comparing the performance with recent scene flow works, and validating the good performance of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes an interesting idea to represent the scene flow as a velocity field using a neural network, making it very easy to combine the temporal information (time) and the spatial information (position of points).\", \"The authors have done extensive analysis of the proposed method, and have shown different ablation studies to validate the effectiveness of the method.\", \"The proposed method also shows the potential to deal with small objects and emergent flows in robotics scenarios, which could be interesting when applied to highly dynamic environments.\", \"Overall, the writing of the paper is clear, and the visualization is easy to understand.\"], \"weaknesses\": [\"When using the time interval between [-1, 1] for the time encoding, will the proposed method not be able to handle time step outside the range? Given that the representation is a continuous neural network, how does it extrapolate to a longer sequence with the current representation?\", \"When comparing with a method like NSFP, I wondered if the authors could show the results of pure Euler integration of the method and highlight the benefits of wrapping a PDE with a neural network.\", \"The authors mentioned that they only do sequences of length 20, I wondered if the method failed rapidly with the increase of the sequence length. It would be interesting to show an even longer sequence to highlight the arbitrary time query property of the proposed method.\", \"I feel like the authors want to talk about too many things in this paper, so they may overlook the most important part of the method. 
This method is good at dealing with long-term flow trajectory and has the potential to better capture the small, highly dynamic objects in the scene. The authors could reorganize the motivations and experiments to highlight the advantages of the proposed method.\"], \"questions\": \"- The discussion of the different activation functions (appendix) is indeed interesting. And this could be one of the interesting parts of the ablation study. However, it is strange to see that using the Gaussian non-linear function is yielding very bad performance. Perhaps the spectral width needs to be fine-tuned, especially when the distribution of the lidar scene flow is very unique.\\n\\nPlease also see the above section for detailed comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a neural representation to optimize scene flow as a discrete partial differential equation of scene geometry over time. Compared to previous method, Neural Scene Flow Prior (NSFP), a method is most related to this work in the neural representation, the proposed method introduces a multi-frame formulation and learns bi-directional three-step Euler integration of the geometry consistency using decoded per-frame scene flow. Compared to previous work, the proposed representation can achieve better performance in Autonomous driving datasets and the authors demonstrate qualitative performance on depth camera input as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed scene flow representation is simple and technical sound. Compared to prior work NSFP, extending it to multi-frame and learns a bi-directional consistency scene flow is a very intuitive step forward.\\n2. The performance of this method (both qualitative and quantitative) is impressive. 
The method can learn very consistent scene flow in trajectory despite not explicitly considering common issues such as occlusion artifacts. As the paper demonstrated, it can tackle well on small objects (with potentially large motions as well).\", \"weaknesses\": \"1. The paper title and introduction is very general and does not provide a precise position of this paper's main contribution. \\\"Scene flow a partial differential equation\\\" has been historically formulated long time ago in many prior paper, e.g. [1] as one examplar reference, and it has been proposed as a continuous representation in one early seminal work [2]. Many related work studied this optimization problem using images input and solved it using differential optimization approaches before. In this paper seems only consider related work in the point cloud space, and beneficially solved in using a neural representation. I will suggests to more precise position their scope and contributions in paper title, introduction and contributions.\\n\\n2. The evaluation dataset in this paper is mostly on autonomous driving datasets though as the method demonstrated, it should also work on other data domain when depth is available. Though real world depth and flow ground truth is hard to get, it won't be too hard if evaluated using a more general synthetic dataset that provide different motion patterns, compared to the type of motion and accuracy that autonomous driving dataset can provide. \\n\\n3. The paper has already discussed the main limitations it section 6.1. Particularly for the last point \\\"EulerFlow does not understand ray casting geometry\\\", it was clear how this has been demonstrated in the current results. 
It would be good if the authors can provide examples and metrics that reflect the challenge in this aspect.\\n\\n[1] A Variational Method for Scene Flow Estimation from Stereo Sequences, Frederic Huguet and Frederic Devernay, ICCV 2007\\n\\n[2] Three dimensional scene flow, Vedula et al, CVPR 1999\", \"questions\": \"Among all the three points I illustrated in the weaknesses,\\n\\nFor point 1, I hope the authors can provide a concise update on their paper title and contributions, in particular for the first bullet item (line 99-100).\\n\\nFor point 2, the current evaluation is sound and maybe sufficient for this paper. I do believe it would be nice to have more quantitative evaluation on non-AV datasets, which very likely will benefit this method as a baseline for future work in different domains. \\n\\nFor point 3, it would be good if the authors can provide a specific example (as a figure)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for addressing my questions and concerns in the weakness and question sections. I don't have additional questions or concerns. The paper is very clear about its own limitations and will pave the way for future work in this direction. The technical novelties and insights brought by this paper clearly push the state-of-the-art forward. 
I believe the work could have a bigger impact if it can provide quantitative evaluation in additional scenarios, and I also agree this does not hurt the evaluation in this paper and can be left for future work.\\n\\nGiven the position of the paper, I will maintain my original rating above acceptance.\"}", "{\"comment\": \"# Summary of Reviews\\n\\nWe present EulerFlow, an \\u201cinteresting\\u201d (V3mD), \\u201csimple\\u201d (DftH), \\u201cinnovative\\u201d (pt2w), and \\u201ceffective\\u201d (XQYd) reframing of scene flow as the task of estimating a continuous space-time differential equation that describes motion across an entire observation sequence. We perform extensive analysis on Argoverse 2 and Waymo Open (V3mD, pt2w), outperforming both supervised and unsupervised prior art (V3mD). Notably, EulerFlow is particularly effective at estimating flow for small objects (pt2w, XQYd), and presents emergent 3D point tracking behavior (pt2w).\\n\\nWe want to thank the reviewers for their questions and comments, as we believe they have materially improved our paper presentation, including: \\n\\n - [DftH] updating the title of our paper to _Neural Eulerian Scene Flow Fields_ to better highlight our contribution of representing scene flow as a velocity field using a neural prior. \\n\\n - [XQYd, V3mD] updating Sections 3 and 4 to improve presentation clarity / reproducibility and include additional implementation details in Appendices C and D. We will release our code to facilitate future work.\\n\\n - [pt2w] visualizing failure cases in Appendix A.3 to better illustrate our method\\u2019s limitations.\"}
We feel this better reflects the core of our contribution: of using neural representations to model Eulerian flow fields (see Figure 4 in the updated draft). Unfortunately, while we have updated the PDF\\u2019s title, it appears we cannot update the OpenReview title during discussion, but we will be sure to change that for the camera ready.\\n\\nWe have also sharpened our language around our contributions to make it clear that our novelty comes from formulating scene flow as a differential equation over _all observations_, which differentiates it from prior art including the cited seminal papers.\\n\\n**Evaluation on other (synthetic) datasets**\\n\\nTo our knowledge, there aren\\u2019t good _real-world_ datasets for scene flow outside of Autonomous Vehicles. As we discuss in Supplemental B.3, in addition to often not having long observation sequences, Chodosh et al. [3] point out that synthetic data often has unrealistic scan patterns and object motion when compared to the real world.\\n\\nConsequently, while we are unable to provide quantitative results on the tabletop scene, we feel that the qualitative results (images in the paper and interactive demos on our project page) demonstrates EulerFlow\\u2019s value to domains beyond autonomous vehicles. That said, we, like you, strongly believe that this lack of quantification on real-world data outside of AV is a big limitation for the subfield, and this is an important area of future work to move the subfield forward \\u2013 this subfield means relatively little if not merged into the broader field\\u2019s efforts towards building large intelligent systems.\\n\\n**Clearer depictions of method\\u2019s failures**\\n\\nWe think this is great feedback. We have added Supplemental A.3 that focuses on further illustrating the failures we discuss in Section 6.1.\\n\\n\\n[1] Rigid Scene Flow for 3D LiDAR Scans. Dewan et al. IROS 2016.\\n\\n[2] LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving. 
Meyer et al. CVPR 2019.\\n\\n[3] Re-Evaluating LiDAR Scene Flow. Chodosh et al. WACV 2024.\"}", "{\"comment\": \"Thank you for your review. We have updated our draft to incorporate your feedback.\\n\\n**Runtime of Eulerflow**\\n\\nWe agree that, as currently implemented, the optimization speed is prohibitively slow and needs to be significantly improved to be deployed at any reasonable scale. As we point out in Section 6.1, the first NeRF method also took roughly 24 hours to optimize on a single scene, but follow up work on algorithmic, optimization, and engineering improvements have significantly reduced NeRF optimization time.\\n\\nTo further substantiate this for EulerFlow, here's some back of the napkin math:\\n\\nEulerFlow is fully GPU compute constrained (i.e. not memory bandwidth constrained like some applications such as LLM training [1]) during optimization using V100s; given improvements in Float32 throughput, we can reasonably expect at least a roughly 3x speedup by switching to the latest GPUs [2], and with lower precision / sparsity exploration (V100s are Turing based, which have none of the Ampere / Hopper hardware acceleration features), etc we can feasibly get another 3x - 4x improvement.\\n\\nIf, via a combination of above steps, we cut the runtime 10x, i.e. roughly 24 to 2.4 hours, we think we are in the ballpark of being feasible to scale up. ZeroFlow [3] performs distillation from NSFP into a feedforward student at scale; their cited numbers for NSFP are roughly 26 seconds per frame pair, roughly 1.11 hours per ~155 frame sequence. This is only about 2x faster than our hypothetical improved EulerFlow. At current open market cloud H100 prices (roughly 2.50 USD / card / hour), this would be roughly 6 USD / sequence. 
Given EulerFlow\\u2019s scene flow quality (and its ability to extract long tail flow on out-of-taxonomy objects like birds!), we feel this is very much worth the cost to get good quality pseudolabels.\\n\\n**Hyperparameters for EulerFlow**\\n\\nAn attractive aspect of EulerFlow is, outside of cripplingly bad hyperparameter settings (Figure 11), it works out-of-the-box on new domains. \\n\\nFor our tabletop demos, we took the exact same config for our Argoverse 2 experiments and just fed in our tabletop data and it worked on the first try. EulerFlow will almost certainly work better with domain-specific hyperparameters, but we find that reasonable settings (e.g. Depth 8 or 18) on a new domain work well.\\n\\n**Visual failure cases for EulerFlow**\\n\\nWe think this is a great point; in Supplemental A.3 we have added a figure showcasing EulerFlow\\u2019s failure on the tabletop jack. In keeping with our discussion on EulerFlow's limitations (Section 6.1), we explain why this general class of failures is caused by the lack of ray casting geometry.\\n\\n**EulerFlow handling deformable objects?**\\n\\nWe think this is a great question, as it highlights an important area of contribution: our method itself makes no assumptions about rigidity in the representation or loss functions, so in principle it\\u2019s able to handle deformable objects.\\n\\nWe also demonstrate this in practice with movement of the hand in the _Hand Place in Sink_ scene and flexing of the rod in the _Tennis Ball on a Flexible Rod_ scene on our demo website (https://eulerflow.github.io/) \\u2014 in both cases, the method is able to describe these motions without issue.\\n\\n**Point cloud density (space and time axes)**\\n\\nPoint cloud density improves performance on both the space and time axes, as both enable Chamfer Distance to better approximate the true flow. 
NSFP discusses the failure cases of Chamfer Distance in the spatial axes; DynamicFusion [4] discusses the importance of having high frame rate sampling in making these reconstruction problems easier because motions are smaller and thus the interpolation problem is less challenging.\\n\\n[1] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Dao et al. NeurIPS 2022.\\n\\n[2] Hopper Whitepaper, Nvidia, GTC 2022.\\n\\n[3] ZeroFlow: Scalable Scene Flow via Distillation. Vedder et al. ICLR 2024.\\n\\n[4] DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time. Newcombe et al. CVPR 2015.\"}", "{\"title\": \"Response to the Authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your time and valuable additional clarifications in the rebuttal. I will keep my original positive rating.\\n\\nBest regards,\\nReviewer\"}", "{\"metareview\": \"The paper presents EulerFlow, a framework for scene flow estimation over 3D point clouds, where EulerFlow is a neural network that produces the flow of a given point between two given time steps, the parameters of the network are optimized by minimizing the forward and cycle consistency errors. Experiments are presented on the ArgoVerse scene flow challenge, obtaining state-of-the-art results.\\n\\nThe paper received overall favorable reviews with one accept and three borderline accepts. 
The key strength of the paper is the large empirical improvements showcased on the ArgoVerse dataset, which was appreciated in many of the reviews.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns on three major fronts:\\n* on the lack of clarity in the technical exposition (XQYd, V3mD), \\n* enormous compute needed to solve the scene flow problem per scene (pt2w, XQYd), and \\n* lack of experiments and ablation studies (DftH)\\n\\nIn the discussion that ensued, authors revised the paper, by changing the title to emphasize the main contribution, added technicalities explaining the formulation better, and fixed some mistakes. The concern of speed is an important one and cannot be rectified within the current setup. \\n\\nOverall, AC agrees with the reviewers that the performance improvements brought out by the proposed method are commendable and thus recommends accepting the paper. Authors should incorporate the reviewers feedback in the camera-ready. AC also finds many issues remaining in the paper listed below, which the authors need to fix.\\n1. Authors need to state clearly the PDE that they are solving, explicitly stating the consistency criteria and assumptions, e.g., are the trajectories continuous? Occlusions or introduction of new objects in the scene are avoided, etc. \\n2. Authors should provide ablation studies on the use of the various losses used, e.g., forward prediction and cycle consistency losses. \\n3. There appears to be errors in the mathematical details. For example, Eq. (1) appears to have mistakes and Eq. 3 has formatting issues. These need to be fixed as well. \\n4. Further, there are many neural PDE formulations proposed for predicting 2D optical flow between images. 
AC thinks such formulations can be extended to the setting presented in this paper (such as [a, b] below) and thus the paper should clearly provide rationale on how the proposed method is conceptually different and superior to such prior methods. \\n\\n[a] Zhuang, Weihao, et al. \\\"Optical flow regularization of implicit neural representations for video frame interpolation.\\\" APSIPA Transactions on Signal and Information Processing 12.1 (2023): e39.\\n[b] Cho, Seokju, et al. \\\"FlowTrack: Revisiting Optical Flow for Long-Range Dense Tracking.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\"}", "{\"summary\": \"The paper introduces a novel approach to scene flow estimation by reframing it as the task of estimating a continuous space-time partial differential equation (PDE) that describes motion across an entire observation sequence. The proposed method, called EulerFlow, utilizes a neural prior to represent this PDE and optimizes it against multi-observation reconstruction objectives. This approach allows for high-quality, unsupervised scene flow estimation on real-world data, outperforming existing supervised and unsupervised methods, particularly on small and fast-moving objects like birds and tennis balls. The authors demonstrate the effectiveness of EulerFlow on datasets such as Argoverse 2 and Waymo Open Dataset, and show its applicability across different domains without domain-specific tuning. 
Additionally, they highlight emergent 3D point tracking behavior as a result of solving the estimated PDE over long time horizons.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I'm not up-to-date to the latest scene flow models, but from the results in the paper it surpass the prior art by a large margin, which is very significant\", \"Introducing the concept of modeling scene flow as a PDE is innovative and offers a new direction for research in motion estimation.\", \"The method is rigorously developed, with comprehensive experiments and ablation studies that validate the approach.\", \"The paper is well-written, with clear explanations and effective use of figures to illustrate key points.\"], \"weaknesses\": [\"As stated in the paper, the speed of the proposed method is a big concern, preventing it from deploying on real world application.\", \"Some hyperparameters, such as the depth of the neural prior, seem to require dataset-specific tuning (e.g., increasing depth to 18 for the Argoverse 2 challenge), which may affect the method's out-of-the-box applicability.\", \"It would be great if the author could show more failure cases to help readers better understand its limitations.\"], \"questions\": [\"Overall I believe this paper is in a good shape, the authors discuss the properties and limitations of the proposed method thoroughly in the paper. I have a few more questions:\", \"How does the method handle scenes with deformable objects?\", \"What is the impact of temporal sampling rate on performance?\", \"How does the point cloud density affect the performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"Thanks for providing a detailed response.\\n\\nI have read all the reviews and comments, I think most of the concerns were addressed by the authors. 
Therefore, I would like to keep my original positive score.\"}", "{\"summary\": \"The paper proposes SFvPDE, a framework to cast scene flow estimation as a PDE with a neural prior, and EulerFlow, an example demonstrating how SFvPDE can be trained using the Euler method to locally integrate the PDE during training. A space-time-dependent vector-field is trained to match subsequent point clouds at different timestamps via solving the underlying PDE. The method significantly outperforms both supervised and unsupervised baselines and is especially effective on small objects compared to prior work.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The method is simple and intuitive yet effective. The extensive experiments section clearly shows that the proposed method surpasses prior work.\", \"weaknesses\": \"The main weakness of EulerFlow, as also noted by the authors (lines 524-528), is the time it takes to converge on a single scene. But given the performance of the method, this should not be considered critical. However, the presentation of the paper can be improved: the paper lacks some implementation details, as from time to time the reader has to guess what is actually happening (see questions section).\", \"questions\": \"Here are some questions and concerns regarding the presentation and the method:\\n\\n1) In lines 189 and 195 $\\\\frac{\\\\partial L^*}{\\\\partial t}$ is referred to as the partial differential equation, or a PDE. However, $\\\\frac{\\\\partial L^*}{\\\\partial t}$ alone is not a PDE yet, unless it is set equal to something (as in equation 2).\\n2) I guess that in equation 2 SFvPDE should also depend on $x$. Could the authors clarify this?\\n3) In general, it would be nice to have more formal definitions. E.g. 
in EulerFlow, an exact formula for solving the PDE, $\\\\text{Euler}_\\\\theta(P_t, d, k)$, would improve understanding and reproducibility of the method.\\n4) In principle, the PDE can be integrated in both directions by simply reverting the time. The usage of the direction as an extra argument in the model makes the connection between sections 3 and 4 slightly weaker and seems to be a legacy design choice from NSFP. Thus, a question to the authors is whether they have tried training without the direction argument?\\n5) Given the high computational complexity of the method, it would be better to see some implementation details on how exactly equation 3 is calculated during training. Are any optimizations already incorporated? E.g. in the current form separate terms in the loss are independent. However, I believe that subsequent Euler steps can use previous estimates instead of recalculating them.\\n6) More ablation studies would better highlight the contributions of the paper. E.g. how general and how sensitive is the method to different numerical solvers and sizes of discretization steps? Have the authors tried higher-order PDE solvers or using $\\\\Delta t$ smaller than the time between observations?\\n\\nI will adjust my score based on the other reviews and the rebuttal by the authors.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your review. We have updated our draft to incorporate your feedback.\\n\\n**Sequence length of only 20 / what happens if the sequence is longer?**\\n\\nBy default, EulerFlow is optimized over the _full sequence_ (e.g. ~150 to 160 frames on Argoverse 2). 
\\n\\nThe 20 frames setup you are referencing comes from a particular comparison of EulerFlow to NTP [1]: in order to perform a fair comparison against NTP, which is only designed to (and only performs well on) shorter trajectories, we evaluated EulerFlow and NTP head to head on these 20 frame subsequences. Figure 10 depicts these results, as well as the extension of EulerFlow to the full sequence length (where full length EulerFlow performs significantly better).\\n\\n**Time interval of [-1, 1]**\\n\\nEulerFlow is doing full sequence motion _reconstruction_, where -1 represents time at the beginning of the sequence and 1 represents time at the end of the sequence. Importantly, our method is not designed to do forecasting outside of the time range, in the same way a Dynamic NeRF is not designed to extrapolate beyond the time range of the given data.\\n\\nHowever, in principle, you could choose to query our differential equation beyond the optimized time horizon (e.g. at time 1.1 or -1.1) and you will receive some extrapolated estimate. To give some additional hint at what might happen when extrapolating, take a look at the point tracking on the tape from the _Ball and Tape_ scene on our demo website (the third scene from the left; https://eulerflow.github.io/) \\u2014 the point being tracked (the tape) disappears from the scene, resulting in point tracking across a region without data support; the point estimate just slowly drifts through space.\\n\\n**Euler integration between successive NSFP estimates**\\n\\nThis baseline is in NTP's evaluations [1]. Unsurprisingly, this baseline is significantly worse than NTP (which EulerFlow in turn significantly outperforms), as NSFP is not optimized to provide multi-frame motion estimates, so chaining outputs across frame pairs together produces catastrophic trajectory artifacts.\\n\\n**Organization improvements**\\n\\nThank you for the suggestion. 
We have improved the organization of Sections 3 and 4 and have added additional implementation details to Appendix C. \\n\\n**Choice of non-linearity**\\n\\nWe believe this is indeed a very interesting (and possibly very fruitful) line of future work. At the very least, we think it\u2019s not on-its-face obvious that ReLUs are the \u201cright\u201d nonlinearity for scene flow, and a more careful and theoretically grounded study might find a better one that results in higher quality flow estimates. We chose to include these preliminary results in the Supplementary because they were an early experiment we ran that we thought might provide some signal to the community for what to look at next.\\n\\nIn that vein, while we believe smoothness is an important factor for the ReLU MLP's good performance, we agree that the poor performance of the Gaussian is probably a consequence of poor hyperparameter tuning for spectral width. We believe the right prior plus some other tricks (e.g. learning an offset for the spectral width per layer) will allow it to converge, but we abandoned this effort for this project due to the good performance of the ReLU MLP.\\n\\n[1] Neural Prior for Trajectory Estimation. Wang et al., 2022\"}" ] }
0C5iHPPwsG
Autoencoder-Based General-Purpose Representation Learning for Entity Embedding
[ "Jan Henrik Bertrand", "David B. Hoffmann", "Jacopo Pio Gargano", "Laurent Mombaerts", "Jonathan Taws" ]
Recent advances in representation learning have successfully leveraged the underlying domain-specific structure of data across various fields. However, representing diverse and complex entities stored in tabular format within a latent space remains challenging. In this paper, we introduce DeepCAE, a novel method for calculating the regularization term for multi-layer contractive autoencoders (CAEs). Additionally, we formalize a general-purpose entity embedding framework and use it to empirically show that DeepCAE outperforms all other tested autoencoder variants in both reconstruction performance and downstream prediction performance. Notably, when compared to a stacked CAE across 13 datasets, DeepCAE achieves a 34% improvement in reconstruction error.
[ "customer", "embeddings", "embedding", "tabular", "general", "purpose", "autoencoder", "representation learning", "general purpose", "reconstruction loss", "entity", "entity embedding", "entity representation", "contractive autoencoder", "dimensionality", "reduction", "latent", "space", "representation", "feature", "regularization", "variational autoencoder" ]
Reject
https://openreview.net/pdf?id=0C5iHPPwsG
https://openreview.net/forum?id=0C5iHPPwsG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xBkGqwCdXG", "wFB8X3S64M", "tIQkvzv7nb", "rkhZR4ODw5", "iPEJ1YKLJl", "iOl8LjEbix", "gHK4zm5OJA", "fUnWTepnnu", "ZkvClOKYUU", "XIIX8QKuIc", "UQtQ2dBcfR", "SXckU8FIJw", "RzQf7AE8At", "GgYNipayF3", "FcVKpMGpZi", "F2qW4NZuh1", "Ca4TnSSaEL", "CH2DGuMMPq", "BpiEsEP3Vj", "B16SyFwwM6", "9M5gwhk3Eu", "8xRqOZNe5z", "6zCpv7g9IS", "3nv1AZR1G0", "171iakyQW4" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730656627539, 1732700210344, 1732316026539, 1732315588848, 1729623653653, 1732315529474, 1732958277891, 1732315466038, 1734469131562, 1732315674502, 1730297298128, 1732958328544, 1732315622589, 1732315555797, 1732315383597, 1732315762041, 1732315770760, 1737524012389, 1732315178981, 1732957954789, 1732315621577, 1732316099721, 1732647379222, 1730549236142, 1732886838037 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_Y2Xz" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_Krnx" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Area_Chair_aZ2V" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_PZNQ" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Authors" ], [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_Krnx" ], [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_S38J" ], [ "ICLR.cc/2025/Conference/Submission9893/Reviewer_Y2Xz" ] ], "structured_content_str": [ "{\"summary\": \"In this work the authors extend the Contractive AutoEncoder (CAE) framework for the calculation of the Jacobian\\nof the entire encoder in the contractive loss from single-layer to multi-layer settings ( DeepCAE ).\\n\\n\\nEmpirically over tabular benchmarks, the authors show DeepCAE can be leveraged in a general purpose embedding \\nframework where embeddings are feed to XGBoost to obtain gains in reconstruction performance and comparable/slightly better performance \\ndownstream prediction (classification/regression) performance as compared with various AutoEncoders and Transformer baselines \\n( though not when compared with KernalPCA from a downstream performance perspective ). 
\\nThey additionally show the noteable reconstruction performance of DeepCAEs compared with Stacked CAEs .\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The authors show how the CAE framework ( an AE with an additional objective component that is the Frobenius norm of the model with respect to the input ) can be extended to a multi-layer setting in a way that is advantageous when compared with prior extensions to CAEs which worked via stacking.\\n\\nThey do an extensive empirical analysis to discuss reconstruction and downstream accuracy benefits of the method while discussing costs of the method (scaling as the method is cubic with respect to layer size ) and its downstream limitations compared to KernalPCA.\\n\\nThe setting ( encoding tabular data with multi-modal data types ) is important and they show how its not handled readily or generally by Transformer type architectures.\", \"weaknesses\": \"1) Time comparisons and particularly error bars needed everywhere ( KernelPCA/StandardAE/DeepCAE ). For your benchmarks are you all running multiple seeds per each?\\n\\nThe main question ( which the authors have pointed out openly in the paper and for future work ) is if the cost of DeepCAEs is worth the effort? They\\u2019ve shown reconstruction is slightly better, but for tasks its pretty comparable to AE and KernalPCA does better (still an interesting finding ). How important is reconstruction loss really for this setup? What is the time complexity of KernalPCA and outside of reconstruction loss being subpar are there other reasons to not use it?\\n\\n2) Are there stronger baselines to compare against both encoding wise and classifier wise (ie, XGBoost vs something else) against if what we care about is tabular performance using embeddings? 
The former is the more important of the two, and there is a NeurIPS workshop on fusing modalities for tabular data that's in its 3rd edition (https://table-representation-learning.github.io ) \\n\\n3) The general purpose embedding pipeline seems like the standard solution to re-using embeddings for downstream tasks from vector databases and not particular to DeepCAE? Is this the case? If not, it could strengthen the paper to clarify how.\\n\\n4) It would be interesting to either show experiments on or discuss how DeepCAE does on just image or text data as well to compare its reconstruction and downstream task performance there. Is there anything in particular that makes this approach specific to tabular data with multi-modal data? If the method gives performance boosts when encoding image/timeseries/text, it would greatly strengthen the results of the paper and make a stronger case for incorporating the method.\\n\\n5) While the background on CAE was very much needed, the sections on VAE and Transformers were probably less so ( or could have been pushed into the appendix ), especially since you show they are not nearly as effective. This space could/should be used for looking more at KernelPCA vs Standard AE vs DeepCAE costs/tradeoffs and potentially other modalities ( point 4)\\n\\n6) Did you all do experiments against the original CAE? The paragraph starting at 280 made it seem like you would ( and in the final section you do with StackedCAE), but then in the experiments Convolutional AE is used instead, which I wasn't expecting. I'm assuming this is shown in past work where Stacked CAEs are introduced, but having a sense of that as well would be good since it's much cheaper computationally than both Stacked and Deep CAEs.\", \"questions\": \"Q: Did you all empirically assess how much time this k x d^3 adds time-wise compared with just a standard AE? 
It's an offline computation, but getting a sense of what that tradeoff comes out to time-wise for your datasets would be interesting ( i.e., is it that big of a hit in the end since the datasets are all below 45k instances each ). How does KernelPCA perform?\", \"q\": \"Is there a reason in particular for using tanh activations in your extension of CAE? I get it allows for the decomposition shown, but are there other activations or ablations which could have been performed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional details on CAE, Significance and Contribution\", \"comment\": \"Thank you for your suggestion on the coloring; we will take this into account in the future.\\n\\n**CAE:** Following the feedback, we added a geometrical explanation to Section 2.1 in the paper that introduces CAEs to the reader. We paste the added paragraph here as well for your convenience:\\n\\n> Geometrically, the contraction of the input space in a certain direction of the input\\nspace is indicated by the corresponding singular value of the Jacobian. Rifai et al. (2011b) show that the number of large singular values is much smaller when using the CAE penalty term, indicating that it helps in characterizing a lower-dimensional manifold near the data points.\\n> \\n\\nWe also added details on each variable in Equation 1 in that section. For an even deeper understanding, we refer the interested reader to Rifai et al. (2011b). However, it is not necessary to understand all the details explained in Rifai et al. (2011b) to understand our results. 
For simplicity, you could take it as a form of regularization specific to representation learning.\\n\\n**Contribution:** Beyond the extension of CAE to our proposed DeepCAE, another contribution of our work is the comprehensive benchmark of representation learning methods for both reconstruction and downstream performance, which is, to our knowledge and based on our literature review, a novel and far more extensive benchmark than anything that existed before in the field.\\n\\n**Significance of results:** Please note that we added error bars indicating a 95% confidence interval to the plots showing the results on reconstruction performance (see Figures 2 and 5). Also note that these plots are scaled logarithmically, which makes DeepCAE and StandardAE look closer than they are at first glance. They are more than 10% apart in terms of reconstruction performance. This clearly shows how DeepCAE generalizes better compared to all other models in the benchmark. If you are still convinced that these results are not significant against the statistical evidence we provide, we kindly ask you to share specific arguments why you think so.\"}", "{\"title\": \"Answers to Questions 1\", \"comment\": \"1. We agree that the sentence you highlighted needs to be written more clearly and accurately, thank you for pointing it out. We updated the paragraph starting in line 47. \\\\\", \"additional_context_behind_our_reasoning\": \"\\\\\\nWith that paragraph, we want to convey that research in the past decade has mostly focused on representation learning of specific modalities, including LSTMs and Transformers for sequential data and VAEs, GANs and Diffusion models for image data (**\\u201cthese representation methods\\u201d**). Conversely, we could find little recent research on classical representation learning that could be applied to tabular data during our extensive literature review. 
However, we argue that classical tabular data is still used in numerous real-world applications, highlighting the need for a continuation of research in classical representation learning, especially with respect to commonly used entities such as customers and users. This motivates the introduction of DeepCAE, which picks up classical representation learning, extends the CAE framework to multi-layer settings that are feasible under today\u2019s compute resources, and outperforms state-of-the-art methods in a tabular setting.\\n2. Your question about the specificity of our framework to DeepCAE is now answered in line 054. There we point out that the general purpose embedding framework is not specific to the DeepCAE method. In the paper it is used to benchmark the different representation learning methods, isolating the performance differences to the difference between architectures. However, as the framework is well suited for operations and real-world use, we introduce it as a contribution.\\n3. We agree with your point on limiting the preliminaries on the other methods. Following your feedback we updated the paper by (a) shortening these tangential introductions and (b) including more details on CAE in Section 2.1 and Section 4.1. Moreover, we agree that there is a lot to say about CAE, and understanding it deeply requires extensive knowledge of representation learning and linear algebra. While we provide a comprehensive overview, it is hard to provide extensive details while respecting page limits and avoiding repetitions from the original CAE paper. Do you believe there are other key properties of CAE we are missing in Section 2.1?\\n4. We updated the paper to include a more detailed description of Equation 1, as well as a definition for all its symbols.\\n5. How CAE learns stable representations is explained in Section 2.1. We also added some geometrical context for a more complete reasoning. 
The superiority over denoising autoencoders was shown by Rifai et al. (2011b), as cited in our paper. If there are still some aspects around CAE that are not clear to you, please let us know.\\n6. Agreed, good point. We rewrote this sentence to make it easier to read:\\n\u201cwe analyze related work and find that, to the best of our knowledge, all use stacking (Wu et al., 2019; Aamir et al., 2021; Wang et al., 2020). This includes Rifai et al. (2011b), who originally proposed the CAE.\u201d\\nFollowing your feedback, we also added details on stacking at the end of Section 2.1. A stacked CAE is a series of autoencoders where the embeddings of the first autoencoder are further embedded and reconstructed using the second autoencoder, and so on. Thereby, the contractive penalty term of each encoder is calculated in isolation with respect to the other autoencoders. We are not aware of any variations of this kind of stacking, and provide an implementation as part of the code repository in the supplementary material.\\n7. Thank you for pointing out the lack of reference here; this conclusion comes from the original CAE paper. We updated the paper with the reference in line 191. On the second point about $d_h$, your understanding is correct that it's the dimension of the hidden embedding space. We have clarified this in the main text in line 193.\\n8. We updated the derivation of DeepCAE starting in line 195 to make our reasoning clearer and increase coherence. \\n9. In the context of entities such as customers there may be descriptive text data concerning the customer (e.g. a descriptive text from the website of a B2B customer), and time series data containing the purchases of a customer each month. We also updated Section 4.2 with examples for text and time series.\\n10. The numeric embeddings of those time series and text data are appended horizontally as additional columns to the original tabular dataset. 
The datasets that are used for the experiments that we directly report on in the paper are all tabular data only without free text or time series features to be embedded specifically. We included this in Figure 1 to showcase its applicability and mentioned it worked for us in practice.\"}", "{\"title\": \"Answer to Weakness 4\", \"comment\": \"The focus of our work is on classical tabular datasets. DeepCAE is not specifically designed to embed a modality different than tabular such as plain text or images. It is possible to learn representations of flattened images using our method without further modification. This was however not the focus of our work. On the other hand, Transformers exploit the sequential structure in text (syntax and semantics), and CNNs leverage the spatial organization of images. Since CAE\\u2019s performance was evaluated on the MNIST dataset, we included a comparison of MNIST in Table 13 in Appendix A.4.2, which makes our work relatable to the original CAE paper and stacked CAE versions.\"}", "{\"summary\": \"This paper proposes a method called DeepCAE to calculate the regularization term for multi-layer contractive autoencoders and utilizes DeepCAE to power a general-purpose entity embedding framework. The experimental results show that DeepCAE outperformed the baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors conducted sufficient experiments to validate the effectiveness of their proposed method. They also provide code and data to reproduce the results.\", \"weaknesses\": \"I think the prominent issue is the writing. 1) It **contains lots of tangential content** which is unnecessary to elaborate on in this paper. For example, in Section 2, I don't see the point of discussing PCA, variational autoencoders, and transformers, as they are not directly part of or foundational to your methodology. 2) The writing **lacks substantial details and is not self-contained**. 
This work is largely based on the previous CAE work by Rifai et al. (2011b), but the introduction to that work is incomplete, making it difficult to understand the proposed method and its details. Instead, the authors frequently refer readers to the original work. This problem also exists in the experiment section, which did not clearly present the experiment settings. 3) **Lack of logical coherence**. Some important arguments lack evidential support and are not clearly described.\\n\\nPlease see questions below for exact points.\", \"questions\": \"1. The argument \"*Consequently, these representation methods not only are limited, but also fall short or are inapplicable ...*\" in line 47-48, do you have evidence to support this claim, such as references or experimental results? Also, what do \"*these representation methods*\" refer to?\\n2. In the following paragraph starting at line 50, how are the two contributions related? The authors might consider clarifying that the framework is based on their proposed method.\\n3. In section 2, what is the purpose of elaborating on PCA, variational autoencoders, and transformers? If they only serve as baselines, a brief introduction in the experimental setup might suffice. Instead, the focus should be on elaborating CAE in this section on its principle and structure. For example, the steps to encode the input and decode it, the loss function.\\n4. Eq. (1) lacks detailed description, such as the meaning of its symbols.\\n5. In line 92-93, \"*Thanks to their ability to produce stable and robust embeddings, CAE were proven to be superior to Denoising Autoencoders (DAE)*\", the real reason behind *produce stable and robust embeddings* and *superior performance* is missing. The authors should make this argument more well-founded.\\n6. In the first paragraph of section 4.1, the sentence \"*we analyze related work and find that, to the best of our knowledge, all use stacking Wu et al. (2019); Aamir et al. 
(2021); Wang et al. (2020), including Rifai et al. (2011b), who originally proposed the CAE.*\\\" is poorly written. Additionally, what kind of \\\"stacking\\\" was used? The authors should elaborate on how previous works implemented this.\\n7. In line 223, how is the conclusion \\\"$O(d_x \\\\times d_h^2)$ to $O(d_x \\\\times d_h)$\\\" derived? I cannot deduce this from Eq. (3) alone. And \\\"*and $d_h$ is the hidden embedding space.*\\\", do you mean $d_h$ is the dimension of the hidden embedding space?\\n8. The explanation of how Eq.(4) is obtained from Eq. (3) is unclear. The entire inference process from Eq. (3) to Eq. (8) lacks coherence.\\n9. In line 274, \\\"*such as text and time-series*\\\" what kind of information is the \\\"text\\\" and \\\"time-series\\\" exactly? Could you provide examples?\\n10. In Figure 1, how exactly do you concatenate the text encoding, TS encoding, and tabular data together?\\n11. In section 5, what are the experimental settings, such as the number of encoder layers and feature dimensions? Additionally, the dataset statistics should be included in the main paper rather than the appendix, as they are important.\\n12. How is the Mean Squared Error (MSE) computed? Is this metric conventionally used in previous work?\\n13. In the first paragraph of section 5.1.2, \\\"*We trained XGBOOST (Chen & Guestrin, 2016) predictors ...*\\\", what is the XGBOOST model, and what exactly is the downstream task?\\n14. 
In section 7, line 522-524, \\\"*Furthermore, we argue that the augmentative capabilities of more complex architectures like Transformers and CNNs are not necessarily useful in the production of a compact representation of an entity.*\\\", do you have evidence to support this argument?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Weakness 2\", \"comment\": \"Encoding-wise, to the best of our knowledge, we integrated all state-of-the-art comparisons. We thank the reviewer for the opportunity to clarify the downstream task comparison, which was also added in the main text: XGBoost was used as a simple, performant, and widely used proxy for downstream performance. Based on the observed results, we do not expect outcomes that are different from those of other downstream models. Thank you for referencing the NeurIPS workshop. While our work is generally task-agnostic, we see opportunities to further share and/or specify our findings.\"}", "{\"title\": \"Invitation to Discussion\", \"comment\": \"We are approaching the end of the discussion period. If there is anything beyond what we already addressed, please let us know and we will get back to you as soon as possible with additional clarifications.\"}", "{\"title\": \"Answer to Weakness 1\", \"comment\": \"On-Time Comparison, Error Bars, Randomness, Cost-Quality Trade-Off, Reconstruction versus Downstream Measures, KernelPCA:\\n\\n1. **DeepCAE Trade-Off**: To assess the trade-off associated with DeepCAE, we added Tables 14 and 15 in Appendix A.4 with a comparison of runtimes in seconds for training DeepCAE, StackedCAE, StandardAE and KernelPCA as well as a brief discussion at the end of the results section at line 460. Also, note that the difference in performance between DeepCAE and StandardAE looks very small in Figure 2 due to the logarithmic scaling, but is more than 10%. 
For your convenience we also include the paragraph on training time here as well: We observe that average training time across all 13 datasets in our benchmark for DeepCAE is about 6 minutes, while it is only about 3.5 minutes for StackedCAE. The median training time is about 2.5 minutes for both, which confirms that DeepCAE scales worse than a comparable StackedCAE. StandardAE is the fastest to train on average, while PCA takes longer than others on larger datasets. For many real-world applications, training times of a few minutes are negligible in the trade-off even for small performance improvements, making DeepCAE the preferred choice.\\n\\n2. **Error Bars**: We added error bars representing the 95% confidence interval for the comparison by reconstruction performance in Figures 2 and 5. The error bars clearly show how DeepCAE outperforms in both comparisons under this 95% confidence interval. In Figure 5, the error bars are very small but non-zero.\\n3. [Random Seeds] Regarding your comment on variation and seeds in our experiments:\\n - For the exact implementation, please refer to our code provided in the supplementary material - we did our best to document it well.\\n - We fixed both the random seeds for splitting the dataset into train and test set, and for validation set for hyperparameter optimization (HPO). This is to keep the split between training and HPO consistent and avoid leakage.\\n - The random seeds for the training and test dataloaders are not fixed to enable Stochastic Gradient Descent (SGD) leading to some variability in the results. For the inference dataloaders (i.e., for embedding generation) the data is not shuffled to isolate the randomness of the downstream modeling for accurate uncertainty reporting.\\n - The seed for model initialization is not fixed to account for parameter initialization in our results. 
The reported performance in our experiments is the mean of three runs (with different random initialization and shuffled training batches).\\n - The random state of the downstream XGBoost model is not fixed as well to account for different initializations and report downstream modeling uncertainty. The random state is derived from the system clock or another entropy source internally.\\n - As you pointed out, we discuss if the improvements brought by DeepCAE are worth the additional cost and training time at line 460 et seq. at the end of the results section 5. The trade-off between improved embedding quality and cost in terms of time depends on the use case and is hard to be generalized, which is why we do not discuss it further in the paper.\\n4. **Reconstruction Loss**: Reconstruction loss directly quantifies the amount of original information preserved in the embedding. It is important to define a way to evaluate the quality of embeddings for a general-purpose tabular autoencoder, with limited or without downstream tasks to test on. If the decoder can accurately reconstruct the input from the embedding (low reconstruction loss), we can conclude that the embedding retains most or all of the essential information from the original input, making it a good general representation. Following your question, we updated the paper to include this reasoning more explicitly in Table 15 in Appendix A.4.\\n5. **KernelPCA**: KernelPCA has a cubic runtime complexity with respect to the number of data points, making it less suitable for larger datasets (see Table 14). 
In addition to the weak reconstruction performance, (1) it comes with higher runtime for large datasets, also due to a lack of GPU support, (2) we are not as flexible in encouraging certain properties of the latent space, such as robustness against noise (as possible with DeepCAE) and (3) the non-linear functions can be learned, whereas KernelPCA requires prior definition of the kernel.\"}", "{\"metareview\": \"This paper proposes DeepCAE for learning general-purpose entity embeddings. Although DeepCAE extends from the contractive autoencoder, the authors provide a more effective design in calculating the multi-layered regularization term. Empirically over tabular benchmarks, the authors show DeepCAE can be leveraged in a general purpose embedding framework where embeddings are fed to XGBoost to obtain gains in reconstruction performance and comparable/slightly better downstream prediction performance as compared with various AutoEncoders and Transformer baselines. They additionally show the notable reconstruction performance of DeepCAEs compared with Stacked CAEs.\\n\\nThe reviewers generally agree on the sufficiency of the experiments in this work. On the other hand, consistent concerns remain on (1) the self-containedness and logical coherence of this paper, (2) quite a few presentation issues, and (3) the significance of the improvement over the original CAE.\", \"additional_comments_on_reviewer_discussion\": \"The aforementioned concerns are still shared among the reviewers.\"}", "{\"title\": \"Answer to Weakness 6\", \"comment\": \"The original CAE paper (https://icml.cc/2011/papers/455_icmlpaper.pdf) compares the single-layer CAE to StackedCAE in Table 2, and shows that CAE with multiple layers outperforms single-layer CAE. Therefore, we took StackedCAE as our baseline.\"}", "{\"summary\": \"This paper proposes DeepCAE for learning general-purpose entity embeddings. 
Although DeepCAE extends from the contractive autoencoder, the authors provide a more effective design in calculating the multi-layered regularization term. Extensive experiments across 13 datasets demonstrate state-of-the-art performance of DeepCAE on reconstruction and downstream prediction tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The extension of CAE for the multi-layer setting is simple yet effective.\\n\\nThe authors conduct experiments across 13 datasets and cover various types of entities.\\n\\nThe results demonstrate state-of-the-art performance on both reconstruction and downstream prediction.\", \"weaknesses\": \"The main paper should be self-contained. The authors may overly refer to the original CAE paper.\\n\\nThe motivation of DeepCAE and CAE is not clearly introduced. It is confusing for me why they are designed for tabular data, and how they are connected.\\n\\nIn the experimental results, the strengths of DeepCAE are not significant compared with the standard AE.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Invitation to Discussion\", \"comment\": \"We are approaching the end of the discussion period. We did our best to address all questions and weaknesses with data-driven insights to enrich our work. Unfortunately, despite our efforts, we have not yet been able to fully convince you of our proposed framework. If there is anything we can do to improve your assessment of the paper and help you better comprehend any necessary details of CAE, please let us know. We will get back to you as soon as possible with additional clarifications. 
Thank you again for your questions and suggestions that contributed to the enhancement of our work\u2019s presentation.\"}", "{\"comment\": \"*References to lines, tables, figures correspond to the new revision, posted together with this answer.*\\n\\nThank you for your review, and for highlighting strengths and weaknesses of our work, which we reviewed carefully to improve it.\\n\\nWe appreciate you highlighting opportunities for improvements on soundness and presentation, and would like to gently ask you to expand on these so that we can make corrections where necessary, and improve presentation aspects such as formatting and readability, especially considering you are fairly confident with your assessment.\\n\\nWe kindly ask you to review our detailed answers below, and review your scores accordingly.\\n\\n### Answers to Weaknesses and Questions\\n\\n[Self-containedness] You rightly noted that our work extends the original CAE framework to multi-layer settings. Finding the right balance between introducing CAE in detail for a self-contained paper (which often means repeating information from the original CAE paper) and focusing on our extension and its benefits is very difficult and highly subjective. Our approach is to introduce CAE (see section 2.1) with all important properties and refer to the original CAE paper where even the most interested readers can learn all the details. The main motivation for using a CAE is the robustness against small perturbations of the input (i.e., noise) to still capture the essence of the input. Following your feedback, we included a geometrical explanation of the contractive effect to deepen the reader's understanding of contractive autoencoders and thereby make the paper more self-contained. \\n\\n[Application to tabular data] CAEs are not particularly designed for tabular data. Rather, they are designed to learn representations of any suitable input. 
Our work is mainly motivated by learning representations of entities stored in tabular format (cf. the paragraph starting in line 47). Hence, we benchmark it on tabular data. Theoretically, it could also work very well for image representation learning. However, there are more tailored approaches for image representation learning (such as VAEs or ViTs), which is why we did not focus on this.\\n\\n[Motivation of DeepCAE and CAE] Thank you for pointing that out, we clarify the motivation for DeepCAE and CAE in Section 1, line 087 et seq. as well as Section 2.1. line 97 and line 114 et seq. There we point out that the main motivation for using a CAE architecture (both DeepCAE and the original CAE) for embedding tabular data is its robustness against noise in the form of small perturbations of the input while still capturing the essence of the input. In essence, the CAE penalty term is a form of regularization that can improve generalization. \\n\\n[How are DeepCAE and CAE connected] DeepCAE, StackedCAE, and original CAE all use the contractive term (Frobenius Norm of the encoder\\u2019s Jacobian) in the loss computation. However, vanilla CAE only has one layer, while the others have more than one. Moreover, while StackedCAE computes the contractive term for each layer independently, our method DeepCAE takes the more holistic approach of computing the contractive term for the whole network jointly, following CAE\\u2019s original design.\\n\\nWe updated the comparison figures with error bars to indicate this in the paper. In the experimental results, the performance difference between DeepCAE and StandardAE is significant under a confidence interval of 95% (cf. Figure 2). Note that Figure 2 is in logarithmic scale, which visually conveys a small improvement between DeepCAE and StandardAE. However, the performance improvement of DeepCAE is about 10%. 
We specified the logarithmic scale of the x-axis in the paper as well.\"}", "{\"title\": \"Answer to Weakness 3\", \"comment\": \"Thank you for pointing that out. Our work indeed is not limited to DeepCAE. The end-to-end framework functions as a versatile framework for generating embeddings and solving downstream tasks. By integrating various embedding models into the pipeline while keeping all other components constant, we ensure that comparisons focus solely on differences in model families and architectures. We clarified this in the main text. We hope this will strengthen the impact of our work.\"}", "{\"title\": \"General Response & Answers to Weaknesses and Questions\", \"comment\": \"Thank you for your review, which highlighted the strengths of our paper and opportunities for improvement.\\n\\nFrom your review and summary we are surprised that the confidence level you provided corresponds to not being able to assess our work. We are open to provide further clarifications in case anything is not clear, and to reflect these in our work for further improvement.\\n\\n### Answer to Weaknesses and Questions\\n\\nRegularization methodologies like dropout or data augmentation are indeed effective to further prevent overfitting, however, to allow a fair comparison between model families and architectures, we intentionally omitted such dimensions, leading to clear conclusions. Future work could dive deeper into this aspects, to provide a comprehensive analysis of regularization methodologies in the context of autoencoders.\"}", "{\"title\": \"General Response\", \"comment\": \"*References to lines, tables and figures correspond to the new revision, posted together with this answer.*\\n\\nWe thank you for the detailed review. We updated the main text for more focused and self-contained content, as also pointed out by other reviewers. 
We believe these improvements will greatly benefit the soundness and presentation of the contribution, while clarifying and strengthening the work.\\n\\nWe kindly ask you to review our answers below, where we mention the improvements we made to the paper also in terms of soundness and presentation, and review your scores accordingly. We are available for further clarifications as needed.\"}", "{\"title\": \"Answers to Questions\", \"comment\": \"1. The difference in complexity between DeepCAE and the other autoencoders leads to noticeable differences in relative runtimes as expected. KernelPCA scales even worse empirically. Find details in Appendix 4.2 and in the aggregated table below (numbers are seconds of training time).\\n| Model | Mean | Sum | Median |\\n|-------|------|-----|--------| \\n| Our Model | 379.898 | 5318.575 | 163.092 | \\n| PCA | 610.688 | 8549.637 | 43.226 |\\n| StackedCAE | 209.424 | 2931.939 | 166.812 |\\n| StandardAE | 144.487 | 2022.811 | 105.452 |\\n\\n2. We changed to the 2017 version, thank you for pointing this out.\\n3. The choice of using the $\\\\tanh(x)$ activation function is motivated by the objective of easing the processing of negative input values thanks to the activation function\\u2019s output in [-1, 1], differently than ReLU and sigmoid. Moreover, the $\\\\tanh(x)$ activation function also comes with a convenient derivative that is built from the output of the function, which makes derivations in Section 4.1 significantly easier.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"General Response\", \"comment\": \"*References to lines, tables, and figures correspond to the new revision, posted with this answer.*\\n\\nThank you for your detailed review, highlighting the strengths of our paper and opportunities for improvement. 
We improved the paper where necessary following your questions and suggestions, and provide detailed responses below with references to such improvements in the paper.\\n\\nWe kindly ask you to review our answers below and review your scores accordingly. We are available for further clarifications as needed.\"}", "{\"title\": \"Reply to Reviewer Comment\", \"comment\": \"Thank you for your valuable feedback. Thank you once more for your questions and suggestions that contributed to the enhancement of our paper.\\n\\nSadly, the deadline for updating the paper has passed before your reply with new suggestions, such that we cannot include them. However, we want to note that the log scale is now explicitly mentioned in the caption of Figures 2 and also 5. Please also note that we do actually mention the time/performance trade-off in the main paper starting in line 460 at the end of Section 5.\"}", "{\"title\": \"Answer to Weakness 5\", \"comment\": \"This point was also highlighted by other reviewers, thank you. We condensed the sections on VAE and Transformers into Preliminaries, and moved the explanation on how we adapted the Transformer architecture for our experiments to the Experiments Section 5.\"}", "{\"title\": \"Answers to Questions 2\", \"comment\": \"11. We updated Section 5 to include details on the experiment settings such as the number of layers (2) and the hidden dimensions (50% of the input dimension) more prominently. We have chosen to include the tables with dataset statistics in Appendix A.1, as they are not directly relevant to the aggregated experimental results discussed in Section 5. Additionally, their inclusion in the main body would occupy significant space, limiting the presentation of other critical details.\\n12. Previous work tends to not compare autoencoder performance based on reconstruction quality at all, hence MSE was not used there. We opted for MSE as a well-known metric for comparing continuous outputs and targets. 
It is computed over a series of $n$ predictions $\\hat y$ and targets $y$ as $\\mathrm{MSE}(\\hat y, y) = \\frac{1}{n} \\sum_{i=1}^{n} (\\hat y_i - y_i)^2$.\\n13. The term \u201cdownstream applications\u201d in Section 5.1.2 refers to machine learning tasks that use the generated embeddings, instead of the original input, as input to predict a related target, as represented in Figure 1. We decided to use XGBoost for these downstream applications in our benchmark, as it\u2019s not only commonly known and used in the ML community but is also simple and performant. Hence, we assumed no further introduction was needed. However, following your feedback we updated section 5.1.2 by providing additional context on downstream applications and on why we chose XGBoost for downstream tasks.\\n14. Transformers and CNNs are designed to augment the input data in certain ways, which is also why they are so performant in their respective domains (e.g. the attention mechanism in Transformers that correlates different parts of the input sequence to extract relations between sequence features). However, from our results in Section 5, we observe that both CNN-based and Transformer-based autoencoders do not work well for our setting. From that, we speculate that the augmentative capabilities of these models are not helpful in generating general-purpose entity embeddings. So the evidence for our claim is our results. These are empirical insights on a comprehensive benchmark, and there is room for future work that could try to understand this more in detail. However, since this work is mainly concerned with finding the best entity embedding model, we do not focus on this.\"}", "{\"title\": \"Thank the authors for their replies\", \"comment\": \"I have read your replies and the revised paper. 
As a small suggestion for next time, highlighting the revised text in color would make it easier for readers to track the changes.\\n\\nHonestly, I am not very familiar with the CAE model, so the presentation still feels unclear to me from a technical perspective, but I may be overlooking the significance of DeepCAE's algorithmic contribution. \\n\\nI also agree with reviewer PZNQ that the experimental improvement of DeepCAE is not significant.\"}", "{\"summary\": \"This paper introduces DEEPCAE, a versatile entity embedding framework based on autoencoders. By extending contractive autoencoders (CAE) to a multi-layer structure and preserving the original regularization design, DEEPCAE enhances both reconstruction accuracy and downstream prediction performance for complex entity embeddings. In tests across 13 datasets, DEEPCAE consistently outperformed other autoencoder variants in both reconstruction error and predictive tasks, achieving a 34% reduction in reconstruction error compared to a stacked CAE. This framework offers an efficient, scalable solution for general-purpose entity embeddings across diverse domains, ultimately reducing time spent on feature engineering and boosting model accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces DEEPCAE, a multi-layer contractive autoencoder designed for general-purpose entity embedding. By extending the contractive autoencoder framework to multiple layers while preserving regularization, DEEPCAE overcomes limitations seen in stacked CAEs, opening new possibilities for autoencoders with high-dimensional data. The study is thorough, with DEEPCAE evaluated across 13 diverse datasets, showing consistently strong results in both reconstruction and downstream tasks that highlight its effectiveness. The paper is well-organized, with clear derivations and detailed appendices on model architecture and hyperparameters to ensure reproducibility. 
Overall, DEEPCAE offers an efficient, versatile solution for embedding across domains, reducing feature engineering time and adding practical value for cross-application embeddings in industrial settings.\", \"weaknesses\": \"DEEPCAE demonstrates impressive results with contractive regularization, but it may not have explored other well-established regularization techniques, like dropout or data augmentation, which are effective in preventing overfitting. It would be beneficial for the authors to consider incorporating these strategies into the DEEPCAE framework. Doing so could enhance the model's robustness, and evaluating their impact on performance in future experiments would provide valuable insights into improving its effectiveness.\", \"questions\": \"In the existing regularization strategies, have other methods been considered, such as dropout or data augmentation? These techniques have been proven effective in preventing overfitting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer reply to rebuttal\", \"comment\": \"Thank you to the authors for the clarifications and additions to the paper.\\nThe error bars, time comparisons and clarification on the log-scale nature of Figure 2 improved the strength of the paper in my view, although I think it would behove the authors to fix Figure 2 to not be in log scale so as to avoid this confusion. I also think adding mention of the time tradeoffs in the main paper and then pointing to the time tradeoffs (either rounded to the second or at most one decimal point, as opposed to three) in the appendix will also help. In light of this I have updated my score.\"}
0BujOfTqab
AdvWave: Stealthy Adversarial Jailbreak Attack against Large Audio-Language Models
[ "Mintong Kang", "Chejian Xu", "Bo Li" ]
Recent advancements in large audio-language models (LALMs) have enabled speech-based user interactions, significantly enhancing user experience and accelerating the deployment of LALMs in real-world applications. However, ensuring the safety of LALMs is crucial to prevent risky outputs that may raise societal concerns or violate AI regulations. Despite the importance of this issue, research on jailbreaking LALMs remains limited due to their recent emergence and the additional technical challenges they present compared to attacks on DNN-based audio models. Specifically, the audio encoders in LALMs, which involve discretization operations, often lead to gradient shattering, hindering the effectiveness of attacks relying on gradient-based optimizations. The behavioral variability of LALMs further complicates the identification of effective (adversarial) optimization targets. Moreover, enforcing stealthiness constraints on adversarial audio waveforms introduces a reduced, non-convex feasible solution space, further intensifying the challenges of the optimization process. To overcome these challenges, we develop AdvWave, the first jailbreak framework against LALMs. We propose a dual-phase optimization method that addresses gradient shattering, enabling effective end-to-end gradient-based optimization. Additionally, we develop an adaptive adversarial target search algorithm that dynamically adjusts the adversarial optimization target based on the response patterns of LALMs for specific queries. To ensure that adversarial audio remains perceptually natural to human listeners, we design a classifier-guided optimization approach that generates adversarial noise resembling common urban sounds. Extensive evaluations on multiple advanced LALMs demonstrate that AdvWave outperforms baseline methods, achieving a 40\% higher average jailbreak attack success rate. 
Both audio stealthiness metrics and human evaluations confirm that adversarial audio generated by AdvWave is indistinguishable from natural sounds. We believe AdvWave will inspire future research aiming to enhance the safety alignment of LALMs, supporting their responsible deployment in real-world scenarios.
[ "jailbreak", "adversarial attack", "audio-language model" ]
Accept (Poster)
https://openreview.net/pdf?id=0BujOfTqab
https://openreview.net/forum?id=0BujOfTqab
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zeiHMdeVGT", "zDWl10CJYP", "xgRgGEe8GC", "v8dY5R2A1d", "uHFbP0JHYO", "u34frkNFK5", "snj9cywTSF", "qbLFWe0J5m", "qHmFcWyiZ5", "ndfpkyo6Eb", "mNuBGOQlGI", "jFLm3zkpGm", "bnhUC7w69G", "X55j5UcfYQ", "PcTPb1NHa0", "NnfJeATAPJ", "KvoDpLaQCk", "G96AbYLt7u", "E3LuMtYgB2", "DqTo8xqM7j", "ByUe51Napi", "BSDnl86PjP", "AOQzUZcbWc", "7Hqs7ZaThk", "66nqLJ35Mi", "2ruQAMNZ0Q" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732640671050, 1732515243677, 1730624631042, 1733110273393, 1732515675129, 1732686317309, 1732640631026, 1732640562088, 1732537312403, 1732514190180, 1732519422373, 1730468766723, 1732515876173, 1737524091215, 1735060998996, 1732722432278, 1729429291334, 1732515543445, 1732524730275, 1733246086412, 1733246110647, 1732602096275, 1733204916140, 1732513679282, 1732640599937, 1730697159497 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_vSXc" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_Eqtj" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_7mqQ" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_cSgt" ], [ 
"ICLR.cc/2025/Conference/Submission10908/Reviewer_7mqQ" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10908/Area_Chair_T3yh" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_vSXc" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_cSgt" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_vSXc" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_Eqtj" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_Eqtj" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Authors" ], [ "ICLR.cc/2025/Conference/Submission10908/Reviewer_Eqtj" ] ], "structured_content_str": [ "{\"title\": \"Follow-up discussion with Reviewer cSgt\", \"comment\": \"We appreciate the reviewer\\u2019s feedback, which has prompted us to reconsider our threat model\\u2014specifically, the approach of adding perturbations or appending a suffix. We acknowledge that the references provided focus on adversarial attacks on ALMs rather than jailbreaks. However, jailbreak literature on LLMs (e.g., [1,2,3,4]) predominantly adopts the strategy of appending a suffix to the original query while preserving its semantics.\\n\\nThis preference stems from the need to maintain the query\\u2019s original meaning, thereby avoiding false positive jailbreaks. For instance, in the example provided in our rebuttal, altering \\u201cgun\\u201d to \\u201cwater gun\\u201d detoxifies the query and results in a false positive jailbreak. Consequently, we maintain that appending a jailbreak suffix is a more appropriate and reliable threat model for jailbreaking ALMs.\\n\\n\\n[1] Guo, Xingang, et al. 
\\\"Cold-attack: Jailbreaking llms with stealthiness and controllability.\\\" ICML 2024.\\n\\n[2] Sadasivan, Vinu Sankar, et al. \\\"Fast Adversarial Attacks on Language Models In One GPU Minute.\\\" ICML 2024.\\n\\n[3] Qin, Yao, et al. \\\"Imperceptible, robust, and targeted adversarial examples for automatic speech recognition.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[4] Zou, Andy, et al. \\\"Universal and transferable adversarial attacks on aligned language models.\\\" arXiv preprint arXiv:2307.15043 (2023).\"}", "{\"title\": \"Response to Reviewer Eqtj\", \"comment\": \"We appreciate the reviewer's thoughtful feedback on our paper. Below, we include additional comments to further improve our work.\\n\\n> Q1: The Transfer baselines are not strong.\\n\\nThank you for the valuable feedback! We select the three transfer-based baselines since none of them requires gradient information, and they show variability in the readability of jailbreak prompts that may affect cross-modality transferability. We further add another strong baseline, the PAIR attack [1], which leverages a red-teaming LLM to refine the jailbreak suffix with black-box feedback from audio-language models. We compare PAIR and AdvWave in Table A. The results show that AdvWave still outperforms PAIR significantly, highlighting the effectiveness of white-box optimization with AdvWave.\", \"table_a\": \"ASR-W/ASR-L of PAIR attack and AdvWave attack on different audio-language models.\\n\\n| | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| PAIR | 0.064 / 0.013 | 0.462 / 0.362 | 0.753 / 0.578 |\\n| AdvWave | 0.643 / 0.603 | 0.891 / 0.884 | 0.981 / 0.751 |\\n\\n[1] Chao, Patrick, et al. \\\"Jailbreaking black box large language models in twenty queries.\\\" arXiv preprint arXiv:2310.08419 (2023).\\n\\n> Q2: More details on the human study and stealthiness score computation.\\n\\nThank you for your question. 
The process for human evaluation of the stealthiness of adversarial audio is designed to assess how imperceptible the adversarial modifications are to a listener. Specifically, three domain experts are instructed as follows: \\u201cYou will be presented with two audio clips: the first is the original audio, and the second is its adversarially modified version. Please rate how likely it is that the second audio clip (adversarial audio) introduces only natural background noise as opposed to significant distortions or unnatural artifacts compared to the original audio. Your rating should reflect this likelihood on a scale from 0 to 1, where 0 means 'completely unnatural or obviously manipulated,' and 1 means 'indistinguishable from natural background noise.'\\u201d Therefore, the human evaluation scores are also bounded, so combining them with the similarity scores is also reasonable. We will include more details in the final manuscript.\\n\\nThank you for the suggestion! We will adopt the concept of signal-to-noise ratio in our revision.\\nFor the spectrogram similarity metric, we flatten the mel spectrogram matrix into a vector and then compute cosine similarity. With this metric, we expect that it would be intensity invariant since the intensity stealthiness is already reflected in signal-to-noise ratio scores. We aim to evaluate the shape similarity of waveforms, so cosine similarity is a better choice.\\n\\n> Q3: Presentation issues.\\n\\nThank you for the thoughtful comment! The term \\\"gradient shattering\\\" originates from [5], where it is defined as nonexistent or incorrect gradients caused either intentionally through non-differentiable operations or unintentionally due to numerical instability.\\nMoreover, in the revised manuscript, we rename \\\"retention loss\\\" as \\\"alignment loss\\\" and refine Equation 2 to emphasize that we only refine a suffix.\\n\\n[5] Athalye, Anish, Nicholas Carlini, and David Wagner. 
\\\"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples.\\\" International conference on machine learning. PMLR, 2018.\"}", "{\"summary\": \"The authors introduce a novel jailbreak framework for optimising audio jailbreaks against audio-language models (ALMs). They overcome challenges in the design of ALMs: namely 1.) they find a dual-phase training process so they can optimise attacks even through discretisation operations in the tokeniser, and 2.) they develop an adaptive search method to find more flexible adversarial targets.\\n\\nFinally, the authors introduce a realistic constraint on their work: that the audio jailbreaks are stealthy. They operationalise this as having human and ALM-based classifiers independently score the audio input for signs that it was adversarially tampered-with. The authors claim (it's hard without hearing audio samples myself) that their jailbreaks are hence indistinguishable from normal urban noise (e.g. a car horn).\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper tackles a important and as yet unaddressed issue in jailbreak literature, and does so with sensitivity to realism. I am particularly impressed with the authors' operationalisation of stealthiness as urban noise (pending audio samples that I can listen to when the paper is out). The authors' use of human judgment to verify and counterweight their classifier (itself a potentially valuable contribution to audio-jailbreak defense) strengthens my confidence in these results even if I can't aurally verify them myself.\\n\\nThe results of their optimisation are strong. Their ASR results are comparable to or exceed the other audio-jailbreak papers I know of that were submitted to ICLR.\\n\\nThe methods developed to optimise jailbreaks against audio-models, given the challenges the authors list, are valuable and novel contributions themselves. 
In particular the method for adversarial optimisation target search seems to me to strengthen the jailbreak method over the baselines they test against. For example, GCG is reasonably well-known for optimising for outputs such as \\\"Sure,\\\" even if immediately followed with \\\"I can't help with that.\\\" The adaptivity and greater detail of the jailbreak targets listed in the appendix seem to me to increase the likelihood that jailbreaks listed as successful in this paper do in fact contain jailbroken information. I'm also given increasing confidence in the evaluations of this paper by the authors' use of both a word-based classifier that detects refusal strings, and an LLM graded response.\", \"weaknesses\": \"While I'm overall very positive on this paper, I'm a little underwhelmed by the baselines. I would expect that the adversarial perturbations of GCG and BEAST to be quite brittle to being converted to spoken text and then fed into an ALM. These are worthwhile baselines to run, but more semantically-natural baselines like AutoDAN would have pushed the paper even further. The authors acknowledge the difficulty and novelty of introducing audio-based adaptive attacks, like transfers of PAIR or TAP: I would have been very excited to see the authors tackle adaptive jailbreaks in the audio domain, but understand why for reasons of cost and difficulty that this might not be feasible - though I am aware of an unpublished audio implementation of PAIR.\\n\\nI think Fig 1 is quite challenging to parse. I would rather it be simplified quite a lot more before final release. In particular, I think there is too much text annotating Phase II, even if helpful for diving deeper into the method. I would prefer at least a much more abstracted version of the figure, without reference to variables from Equation 1, and with the annotation retooled to explain how the different branches refer to each other. 
At the moment I think it's too hard to understand without continuous reference to Equation 1, and the figure struggles to explain itself on its own.\", \"questions\": \"1. Did you try other audio-transcribed jailbreak classes, including more naturalistic text like in Zheng et al's persuasion paper? [1]\\n2. What made you think GCG and BEAST were strong baselines when translated into audio? \\n3. Did you attempt your jailbreaks on any versions of Gemini or 4o? To my understanding some of the more capable models are only trained to recognise speech data - which would presumably make your noise perturbations less effective?\\n4. Who were the humans judging your stealthiness? was there a more detailed rubric you can share?\\n\\n\\n\\n\\n[1] https://arxiv.org/abs/2401.06373\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the suggestion! We have added the results of applying PAIR exclusively on text modalities, using the jailbreak text as input to the audio-language models. These results, presented in Table A, demonstrate that AdvWave consistently outperforms text-only attacks. This is attributed to its ability to manipulate the audio space, which offers greater complexity compared to the discrete token space. 
Additionally, we will include text-only attack results for other baselines (GCG, BEAST, AutoDAN) in the final version of the paper.\", \"table_a\": \"ASR-W/ASR-L of PAIR (Audio), PAIR (Text), and AdvWave attack on different audio-language models.\\n\\n| | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| PAIR (Audio) | 0.064 / 0.013 | 0.462 / 0.362 | 0.753 / 0.578 |\\n| PAIR (Text) | 0.152 / 0.163 | 0.632 / 0.602 | 0.796 / 0.683 |\\n| AdvWave | 0.643 / 0.603 | 0.891 / 0.884 | 0.981 / 0.751 |\"}
PMLR, 2018.\\n\\n[6] Boyd, Stephen, et al. \\\"Distributed optimization and statistical learning via the alternating direction method of multipliers.\\\" Foundations and Trends\\u00ae in Machine learning 3.1 (2011): 1-122.\\n\\n> Q5: More clarifications on the threat model.\\n\\nThank you for the question! We have added clarifications regarding the importance of enforcing stealthiness in response to Q2. We consider both white-box and black-box optimization to be practically significant. As the widespread deployment of large models continues, it becomes increasingly challenging to centralize these models on a single server. For example, deploying a model on a mobile device effectively creates a white-box scenario for users. Therefore, we argue that studying white-box jailbreaks against LALMs is a meaningful research direction to inform and enhance the future alignment of these models.\\n> Q6: Presentation issues.\\n\\nThank you for the suggestions! We improved the presentation in our revision based on the comments.\\n\\n> Q7: Missing related work.\\n\\nThank you for the suggestion! We added a literature review on VLM jailbreak and adversarial attack on STT model in Section 2.\\n\\n> Q8: Adaptive target search seems complicated.\\n\\nThank you for the insightful comment. We acknowledge that prior work, such as [1], employs manually designed adversarial optimization targets, which can be labor-intensive and time-consuming. To address this, we propose an adaptive target search algorithm comprising three key components: object detoxification, response collection, and pattern summarization. While the algorithm introduces the overhead of three additional model forward passes, this is minimal compared to the thousands of forward passes required for optimization, making it both efficient and worthwhile. We have incorporated this discussion into the revised manuscript.\\n\\n[1] Andriushchenko, Maksym, Francesco Croce, and Nicolas Flammarion. 
\\\"Jailbreaking leading safety-aligned llms with simple adaptive attacks.\\\" arXiv preprint arXiv:2404.02151 (2024).\\n\\n> Q9: Cross-model transferability evaluation.\\n\\nThank you for the insightful comment! We evaluate the attack success rates (ASR) of adversarial audio optimized on white-box audio-language models when tested on the Realtime API using the Advbench-Audio dataset. The results, presented in Table B, indicate that the transferability of the optimized audio is limited. This may stem from significant discrepancies in audio encoders, model architectures, and other underlying factors. Developing more transferable attack methods for audio-language models remains an open direction for future research.\", \"table_b\": \"ASR-W and ASR-L of adversarial audio optimized on white-box audio-language models on the Realtime API on the Advbench-Audio dataset.\\n\\n| Source model | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| ASR-W | 0.205 | 0.157 | 0.057 |\\n| ASR-L | 0.046 | 0.024 | 0.023 |\\n\\n> Q10: Naming of ALM.\\n\\nThank you for the suggestion! We agree that ALM is indeed easier to understand and memorize, and we have adopted the naming ALM in the revision.\"}
Consequently, in the revised version of our paper, we will de-emphasize the stealthiness constraint in the threat model and methodology sections. Instead, we will emphasize that the AdvWave framework is flexible and can incorporate additional stealthiness constraints as by-products when specific noise detection mechanisms are in place.\\n\\n> ALM architecture discussion.\\n\\nThanks for the comment! We highlight the application of AdvWave on ALMs with different architectures in Section 3.5 in the current version.\"}", "{\"title\": \"Follow-up discussion with Reviewer Eqtj\", \"comment\": \"We thank the reviewer for the additional valuable comments!\\n\\n> Clarification of PAIR attack in Table A.\\n\\nThe PAIR attack represents an adaptive black-box attack against ALMs. Specifically, we employ the PAIR framework, leveraging feedback directly from ALMs. In this process, the red-teaming LLM iteratively refines jailbreak prompts based on feedback from the black-box ALMs. While a TTS model is used to convert text into audio, the feedback originates directly from the target ALMs, ensuring that this approach serves as a strong adaptive baseline.\\n\\n> Clarifications on the mel-spectrum Cosine similarity.\\n\\nOur choice of Cosine similarity over L2-distance emphasizes the similarity in the shape of the waveform while disregarding variations in magnitude or phase. This approach aligns with the principles applied in the literature on [speaker identification](https://www.iosrjournals.org/iosr-jce/papers/Vol26-issue1/Ser-1/C2601011926.pdf).\"}", "{\"comment\": \"Thanks for the additional experiments and paper edits.\\n\\n> This unnatural quality draws undue attention from human auditors and risks being flagged or filtered by noise-detection systems.\\n\\nI\\u2019m still not convinced by this. If there was a human auditor in the loop then they\\u2019d instantly know an adversary is misusing the ALM because they can hear the person asking for harmful information. 
As for noise detection systems, I\\u2019m not aware of any guardrails that exist currently that do this, so trying to solve this problem doesn't make sense. If you want this as your motivation, then I think you need to implement your own noise detection system and verify that it works well without false positives on benign data. Personally, I think you should reframe your paper to care about \\u201cmisuse\\u201d as a threat model where stealthiness is not a constraint your algorithm needs to care about.\\n\\nThanks for supplying the supplementary material, and your method reduces the noisiness of the audio suffix well. However, I am still not convinced by the motivation.\\n\\n> We would like to clarify that the baselines (GCG-Trans, BEAST-Trans, and AutoDAN-Trans) optimize adversarial suffixes in text modalities, which are subsequently converted into corresponding audio suffixes using TTS models. As a result, the stealthiness scores of these baselines are not perfect (i.e., not 1.0). In contrast, vanilla generation achieves a perfect stealthiness score of 1.0 because it does not modify the original query.\\n\\nThis doesn\\u2019t make much sense to me as a metric because the baselines (e.g. AutoDAN-Trans) are just as stealthy as the vanilla generation in terms of audio quality. I don\\u2019t think you can claim they are less stealthy due to the content of the text being spoken, as the vanilla harmful request is not stealthy at all from a text perspective, either.\\n\\n> We have incorporated [ALM architecture differences] discussions in Section 3 of the revised manuscript.\\n\\nI do not see a clear explanation of how different ALMs differ and which architectures your method works for. This is crucial, in my opinion.\"}", "{\"title\": \"Response to Reviewer vSXc\", \"comment\": \"We appreciate the reviewer's thoughtful feedback on our paper. 
Below, we include additional comments to further improve our work.\\n\\n> Q1: Baseline selection is not strong and lacks motivation.\\n\\nThank you for the valuable feedback! We select the three transfer-based baselines since none of them requires gradient information, and they show variability in the readability of jailbreak prompts that may affect cross-modality transferability. We further add another strong baseline, the PAIR attack, which leverages a red-teaming LLM to refine the jailbreak suffix with black-box feedback from audio-language models. We compare PAIR and AdvWave in Table A. The results show that AdvWave still outperforms PAIR significantly, highlighting the effectiveness of white-box optimization with AdvWave.\", \"table_a\": \"ASR-W/ASR-L of PAIR attack and AdvWave attack on different audio-language models.\\n\\n| | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| PAIR | 0.064 / 0.013 | 0.462 / 0.362 | 0.753 / 0.578 |\\n| AdvWave | 0.643 / 0.603 | 0.891 / 0.884 | 0.981 / 0.751 |\\n\\n> Q2: Improvement of Figure 1.\\n\\nThank you for the comment! We have improved the clarity of Figure 1 following the suggestion.\\n\\n> Q3: Jailbreak attempts on Gemini and GPT-4o.\\n\\nThank you for the insightful comment! We evaluate the attack success rates (ASR) of adversarial audio optimized on white-box audio-language models when tested on the Realtime API using the Advbench-Audio dataset. The results, presented in Table B, indicate that the transferability of the optimized audio is limited. This may stem from significant discrepancies in audio encoders, model architectures, and other underlying factors. 
Developing more transferable attack methods for audio-language models remains an open direction for future research.\", \"table_b\": \"ASR-W and ASR-L of adversarial audio optimized on white-box audio-language models on the Realtime API on the Advbench-Audio dataset.\\n\\n| Source model | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| ASR-W | 0.205 | 0.157 | 0.057 |\\n| ASR-L | 0.046 | 0.024 | 0.023 |\\n\\n> Q4: More details on the human judging of the stealthiness score.\\n\\nThank you for your question. The process for human evaluation of the stealthiness of adversarial audio is designed to assess how imperceptible the adversarial modifications are to a listener. Specifically, a group of domain experts are instructed as follows:\\n\\u201cYou will be presented with two audio clips: the first is the original audio, and the second is its adversarially modified version. Please rate how likely it is that the second audio clip (adversarial audio) introduces only natural background noise as opposed to significant distortions or unnatural artifacts compared to the original audio. Your rating should reflect this likelihood on a scale from 0 to 1, where 0 means 'completely unnatural or obviously manipulated,' and 1 means 'indistinguishable from natural background noise.'\\u201d\"}
This method allows attackers to modify the semantic content of the audio as a whole, thereby avoiding the need to add additional audio segments.\\n\\nThis approach also facilitates the realization of low-perturbation audio attacks, which are less likely to be detected by human auditors due to the subtle nature of the perturbations. In contrast, generating audio with high perturbation levels, such as noticeable appended suffixes, increases the likelihood of detection.\\n\\nGiven these points, the authors\\u2019 response is unconvincing as it does not address these practical and commonly utilized techniques in audio adversarial attacks.\\n\\n[1] Yang, Yijun, et al. \\\"Mma-diffusion: Multimodal attack on diffusion models.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[2] Qin, Yao, et al. \\\"Imperceptible, robust, and targeted adversarial examples for automatic speech recognition.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[3] Yakura, Hiromu, and Jun Sakuma. \\\"Robust audio adversarial example for a physical attack.\\\" arXiv preprint arXiv:1810.11793 (2018).\\n\\n[4] Carlini, Nicholas, and David Wagner. \\\"Audio adversarial examples: Targeted attacks on speech-to-text.\\\" 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018: 1-7.\"}
AdvWave significantly outperforms transferring static jailbreak attacks optimised on text-only LLMs that are subsequently vocalised with text-to-speech (TTS). The authors argue that their approach highlights the need for more robust defenses and safety measures in LALMs.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"**Relevant and timely approach**: LALMs are becoming more prevalent with the recent release of audio capabilities in Gemini and GPT-4o. However, to the best of my knowledge, AdvWave is the first work that has successfully got white-box jailbreak attacks to work.\", \"**Innovative approach**: AdvWave uses a dual-phase optimisation strategy to address the issue of not being able to backpropagate through the full network when it contains discretisation. They also improve optimisation efficiency by adaptively finding a target string that matches the common structure the model uses for benign requests. These challenges are clearly explained, and the authors provide solutions.\", \"**Potential for future research**: AdvWave opens several avenues for future work, including further exploration of defensive mechanisms and applying the framework to LALMs that do not have a discretization bottleneck.\"], \"weaknesses\": [\"**AdvWave is not the first approach to jailbreaking LALMs:** The authors of the paper claim that AdvWave is a novel framework for jailbreaking LALMs, but I think this claim needs to be softened throughout the paper. Gemini and GPT4o both have audio-specific jailbreaks, where they vocalise text jailbreaks with TTS in their recent model cards. Therefore, claiming that AdvWave is a novel white-box attack methodology for LALMs is better.\", \"**Stealthiness constraints are not well-motivated or measured:** It isn\\u2019t clear to me why this stealthiness constraint is needed. Any jailbreak, no matter the stealthiness of the input, is bad. 
Also, stealthiness constraints are not new; they were used in white-box attacks of speech-to-text (STT) models, and the intro doesn\\u2019t explain why LALMs make it more difficult. Papers such as \\u201cImperceptible, robust, and targeted adversarial examples for automatic speech recognition\\u201d should be cited. Did you ablate changing the whole audio file rather than just the suffix? What motivated the suffix and the environmental noises? You can have imperceptible changes to the audio without that, I believe. Also, what environmental classifier do you use? This needs to be cited for replication purposes.\", \"I\\u2019m very confused by the stealth metric and why it is a useful comparison to the baselines. The baselines do not have adversarial suffixes added on; they are just TTS reading a harmful prompt. So why isn\\u2019t their S_stealth equal to 1? It should be maximally stealthy, just like the vanilla baseline. Also, the baselines do not have a car horn at the end of the utterance, which could be considered more stealthy than your method. You mention you need high stealthiness so it is less detectable by real-world guardrail systems, but I don\\u2019t think the results presented demonstrate this. Also, AdvWave is not superior to vanilla in terms of stealth. The three terms in S_stealth are confusing and not well-motivated.\", \"**Lack of relevant adaptive black-box baselines:** The paper only compares attacks that are optimised on text-only LLMs that are then transferred into audio with TTS. Using TTS to vocalise GCG attacks might not make sense - there could be lots of tokens that can\\u2019t be spoken properly so I would expect the attack to be very weak. You say there are no adaptive attacks due to gradient shattering, but there are plenty of good adaptive black box attacks. I expect PAIR and TAP to work well in the audio domain. AdvWave should be evaluated against much stronger baselines than currently used. 
How strong are the transfer GCG/BEAST attacks to the base text-only LLM? E.g. what is the GCG ASR on Llama3? That would inform if the baselines transfer to the audio domain effectively or if they are broken by the architectural / fine-tuning differences.\", \"**Lack of clarity on LALM architecture differences, what architecture AdvWave is aimed at, and motivation for why dual optimisation to solve gradient shattering is needed:** Not all LALMs have discretisation of audio before input to an LLM (like SpeechGPT). Many insert a continuous vector directly into the embedding space of the model (e.g. DiVA, Llama3.1, Salmonn, Llasm, AudioQwen). Therefore, these won\\u2019t have the gradient shattering problem, and the framework in Figure 1 isn\\u2019t relevant. There needs to be better motivation and explanation of why AdvWave targets LALMs that have the discrete bottleneck. Ideally, the paper will explain all the different architectures and introduce a framework that works for all the variants. Also, many LALMs do not have a decoder that maps back to audio space. Lots just do audio-to-text. Only a few models are fully speech-to-speech (some are trained end-to-end, and others just put a TTS module on end). It is important to talk about these. Furthermore, why can\\u2019t you use a straight-through estimator or Gumbel softmax to differentiate through the discretisation instead of the dual optimisation approach? I need more motivation to believe this is necessary.\", \"Also, is gradient shattering a well-known term? A quick search gets something different: https://arxiv.org/abs/1702.08591. Perhaps the problem could just be called \\u201cNon differentiable audio tokenisation\\u201d or similar? I don\\u2019t think the dual optimization method is novel, it would be good to find the original paper that implements something like this. 
Perhaps it would be in the VQVAE literature?\", \"**Lack of threat model:** I\\u2019d like to see your threat model go into depth more about why you focus on white-box attacks and why you need stealthiness constraints. E.g., you can just apply existing white-box attacks to text LLMs already and get bad outputs; why do we care about LALM defense when text isn\\u2019t solved? Isn\\u2019t an attack that elicits a harmful jailbreak that isn\\u2019t \\u201cstealthy\\u201d also a success from the red team\\u2019s perspective? Why does it need to be understandable? These can be addressed in your threat model. Also, you mention in related work that LALMs shouldn\\u2019t be deployed widely if they are not robust, but releasing them as closed source is fine since you can\\u2019t attack with AdvWave.\", \"**Presentation of equations, figures, and results needs to be polished:**\", \"Figure 1: Phase 1 would be nicer on the left. A brief intuition on what each loss is trying to achieve in the caption would be helpful.\", \"Section 3.2, in general, is very hard to follow along. L_retent is talked about a lot before being explained. Include an intuitive explanation earlier. You introduce the notation for the size mappings of each component, but this makes it more confusing, in my opinion. I would put this in the appendix.\", \"Section 3.5 - There is lots of repetition of equations here (e.g. equ 7 is the same as 5 and 6 similar to 1), it would be great if it could be folded into the other sections for conciseness.\", \"I\\u2019m not sure what the perk of having ASR-W is in addition to ASR-L. Often, LLMs are still jailbroken if they say, \\u201cI\\u2019m sorry,\\u201d so I\\u2019d expect ASR-W to have many false negatives. It would be good to manually check the false positive rate of ASR-L.\", \"Figures 2 & 3 need axes labels and should use a color-blind friendly palette (without gradients). 
Figure 4 has text that is too small.\", \"**Related work is majorly lacking citations and doesn\\u2019t contrast with AdvWave:**\", \"Add related work to white-box attacks on VLMs - your work is very comparable to how people jailbreak VLMs, e.g., https://yunqing-me.github.io/AttackVLM/ , https://arxiv.org/pdf/2306.13213, https://arxiv.org/pdf/2402.02309. Also, vocalising the request is similar to putting typographic text into images (like FigStep, Images are Achilles Heel of Alignment, Jailbreak in pieces)\", \"Add related work to white-box attacks on STT models - this is also very relevant, especially the imperceivable constraints. e.g. \\u201cAudio adversarial examples: Targeted attacks on speech-to-text\\u201d, \\u201cThere is more than one kind of robustness: Fooling whisper with adversarial examples\\u201d.\", \"There are many more papers than I provide here, and I\\u2019d recommend doing a proper literature review.\", \"LALM section - I would cut the section around concerns of misuse. This should be discussed in the intro. You should cite frontier models like Gemini and GPT-4o advanced voice mode.\", \"Jailbreak attacks on LLMs section - you should cite https://arxiv.org/abs/2404.02151\", \"**Adaptive target search seems overly complicated:** why did optimising just for \\u201csure\\u201d as the first token not work? This works in VLM literature. When comparing to optimizing for \\u201csure\\u201d, did you use a prompt like in https://arxiv.org/abs/2404.02151? If not, optimizing for \\u201csure\\u201d alone may be much weaker. I\\u2019d expect if you did this, the ASR would increase. Essentially, using an \\u201cadaptively search optimisation target,\\u201d you find a good starting point, but prompting the model to start the response with \\u201cSure, here is\\u2026\\u201d might mean you don\\u2019t need this component. 
Also, why can\\u2019t you find a target string from another jailbroken LLM even if it has a very different structure to the output of the LALM? Shouldn\\u2019t gradient-based approaches still be able to change the model to output this?\"], \"questions\": \"Have you thought about measuring how your attacks transfer between models? I\\u2019d love to see transferability in your work since the threat model I think is most concerning is people finding white-box attacks on open-source models that transfer to more powerful closed-source models. See examples here: https://arxiv.org/abs/2403.09766 , https://arxiv.org/abs/2407.15211\\n\\nSmall discussion point on using LALMs. Most of the field uses VLMs for vision language models, so do you think using ALMs would be a better acronym to popularise in the field?\\n\\nI have weaved most of my questions into the weaknesses section. I think this paper has the potential for a much higher rating (especially given the timeliness of getting attacks working on LALMs, which is a neglected area of the adversarial attack literature), but not in its current form. I am happy to increase my score if the weaknesses I highlighted are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revision Summary\", \"comment\": \"We thank all reviewers for their valuable comments and feedback! We are glad that the reviewers found our work novel (the first white-box jailbreak attempt against LALMs) and sound with solid evaluation results. Based on the reviews, we have conducted additional experiments and made the following updates:\\n\\n1. Expanded discussions on the importance of stealthiness, added supplementary examples, and clarified stealthiness evaluation methods.\\n\\n2. Introduced the PAIR attack as a strong black-box baseline to emphasize AdvWave\\u2019s superior performance.\\n\\n3. 
Enhanced explanations on gradient shattering and its implications for AdvWave optimization.\\n\\n4. Clarified the rationale for appending adversarial suffixes instead of modifying original queries.\\n\\n5. Improved presentation throughout the manuscript, including figures and equations.\\n\\n6. Included additional related work on adversarial attacks for speech models and VLM jailbreaks.\\n\\n7. Expanded discussions on the threat model, highlighting the relevance of white-box jailbreak scenarios.\\n\\n8. Evaluated cross-model transferability of adversarial audio on Realtime API on the Advbench-Audio dataset.\\n\\n9. Acknowledged the limitations in adversarial audio transferability and outlined future research directions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"This paper proposes a jailbreak attack against Large Audio Language Models (LALM). The method uses a dual-phase optimization strategy that addresses the non-differentiable issue, enabling effective end-to-end gradient-based optimization. The method also enforces stealthiness constraints on adversarial audio waveforms to avoid being filtered by noise-detection systems. All the reviewers acknowledge the novelty of this work. Some reviewers question the need to maintain stealthiness. In my opinion, the authors did a good job answering most of the concerns raised by the reviewers. Therefore, I recommend acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers question the threat model, i.e., why do we care about whether the audio is stealthy or not? The authors explained that without the constraints, the unnatural audio quality may be flagged or filtered by noise-detection systems. 
I am convinced by this explanation, and thus, I lowered the weights of the two reviewers who raised this concern and gave low scores.\"}", "{\"comment\": \"Thanks to the authors for their patient and detailed responses!\\n\\n> increasingly challenging to centralize these models on a single server\\n\\nI agree we can expect e.g. more open source deployment / weight leaks, but your specific example of a deployment on a mobile device seems much more uncertain (which is low stakes for the purposes of this argument).\\n\\n> additional annotators, such as those from platforms like Amazon Mechanical Turk, could be hired for more comprehensive labeling.\\n\\nI'd be excited about future, more systematic evaluation - especially for something as interesting and important as the way you operationalise your stealthiness constraint. That you already also use an ALM to detect suspiciousness of your audio samples makes me confident enough in your more minimal use of human judgment. \\n\\nI think with this discussion my overall grade for the paper remains the same but my confidence has gone up. Updating my review to reflect this.\"}
The audio provided clearly contains malicious content, so why consider the stealthiness of the adversarial disturbance? A normal listener would already notice something amiss with the content. Adding adversarial noise to silence segments inevitably leads to listeners hearing malicious content followed by eerie noises, which is utterly unconvincing from a realistic perspective. The authors should more reasonably consider the reasons for the stealthiness of adversarial disturbances and integrate them with the application scenarios of LALMs for a rational design.\", \"questions\": \"In the supplementary materials provided, I am puzzled about adding adversarial noise: 1. The authors mention that the adversarial noise is naturalized using urban environmental sounds as a masking method. However, I can still hear the traditional adversarial disturbances beyond the environmental sounds, suggesting the presence of two types of perturbations, which the paper does not mention. 2. The attack audio samples provided have adversarial disturbances implanted at the end silence segments of the audio, occupying about half the duration of the audio itself. Such a high proportion of silence is unlikely in most audio datasets, revealing a serious issue: can adversarial attacks zero-pad benign audio without restriction to ensure attack success? This seems to relate to the authors' initial claim that audio attacks on LALMs would limit the optimization search space for adversarial disturbances. I imagine the authors extended the audio to ensure sufficient search space, yet this seems impractical in real situations. 3. I am curious why the adversarial disturbances were added to the silence segments. 
Semantically rich portions of the audio seem more susceptible to attacks, and placing disturbances in silent parts would make the noise more detectable by human ears.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7mqQ (Part 1)\", \"comment\": \"We appreciate the reviewer's thoughtful feedback on our paper. Below, we included additional comments to further improve our work.\\n\\n> Q1: AdvWave is not the first approach to jailbreaking LALMs.\\n\\nThank you for the comment! We claim that AdvWave is the first **white-box** attack against LALMs in our revised manuscript.\\n\\n> Q2: More clarifications on the stealthiness constraint.\\n\\nThank you for the insightful question! \\n\\n[Rationale for Enforcing Stealthiness] The motivation for enforcing stealthiness stems from both empirical observations and insights from jailbreak literature. Without this constraint, optimized adversarial audio\\u2014while effective\\u2014often exhibits unnatural, screechy qualities. These anomalies draw undue attention from human auditors and increase the likelihood of being flagged or filtered by noise-detection systems. To illustrate this, we include examples of adversarial audio without stealthiness constraints in the supplementary material. By enforcing stealthiness, we aim to produce adversarial audio that sounds natural, reducing suspicion and bypassing noise filters. This approach parallels advancements in text-based jailbreaks, where recent studies [1,2] enhance fluency and readability to evade perplexity-based filters. 
To address this, we expanded the discussion in Section 3.1 and added relevant work [3], highlighting that enforcing stealthiness is also considered in adversarial attacks against DNN-based audio models.\\n\\n[Clarifications on Stealthiness Scores of Baselines] We would like to clarify that the baselines (GCG-Trans, BEAST-Trans, and AutoDAN-Trans) optimize adversarial suffixes in text modalities, which are subsequently converted into corresponding audio suffixes using TTS models. As a result, the stealthiness scores of these baselines are not perfect (i.e., not 1.0). In contrast, vanilla generation achieves a perfect stealthiness score of 1.0 because it does not modify the original query. However, adversarial audio suffixes, including those generated by AdvWave, typically reduce stealthiness due to the additional adversarial content introduced.\\n\\n[Rationale for Considering Adversarial Audio Suffixes] In line with approaches from jailbreak literature [4], where adversarial suffixes are appended rather than modifying original tokens, our reasoning is as follows: altering the original query risks introducing semantic changes, potentially leading to false positive jailbreaks. For instance, changing the query \\u201cHow to use a gun for fun?\\u201d to \\u201cHow to use a water gun for fun?\\u201d may prompt a concrete and innocuous response, which would not constitute a successful jailbreak. Therefore, in the context of LALM jailbreaks, we prioritize preserving the original query to maintain its semantics and minimize the risk of false positives.\\n\\n[1] Guo, Xingang, et al. \\\"Cold-attack: Jailbreaking llms with stealthiness and controllability.\\\" ICML 2024.\\n\\n[2] Sadasivan, Vinu Sankar, et al. \\\"Fast Adversarial Attacks on Language Models In One GPU Minute.\\\" ICML 2024.\\n\\n[3] Qin, Yao, et al. \\\"Imperceptible, robust, and targeted adversarial examples for automatic speech recognition.\\\" International conference on machine learning. 
PMLR, 2019.\\n\\n[4] Zou, Andy, et al. \\\"Universal and transferable adversarial attacks on aligned language models.\\\" arXiv preprint arXiv:2307.15043 (2023).\\n\\n> Q3: Lack of relevant adaptive black-box baselines.\\n\\nThank you for the valuable feedback! We select the three transfer-based baselines since none of them requires gradient information and they show variability in the readability of jailbreak prompts that may affect cross-modality transferability. We further add another strong baseline, the PAIR attack, which leverages a red-teaming LLM to refine the jailbreak suffix with black-box feedback from audio-language models. We compare PAIR and AdvWave in Table A. The results show that AdvWave still outperforms PAIR significantly, highlighting the effectiveness of white-box optimization with AdvWave.\", \"table_a\": \"ASR-W/ASR-L of PAIR attack and AdvWave attack on different audio-language models.\\n\\n| | SpeechGPT | Qwen2-Audio | Llama-Omni |\\n| - | - | - | - |\\n| PAIR | 0.064 / 0.013 | 0.462 / 0.362 | 0.753 / 0.578 |\\n| AdvWave | 0.643 / 0.603 | 0.891 / 0.884 | 0.981 / 0.751 |\"}", "{\"comment\": \"> Q1: Baseline selection is not strong and lacks motivation.\\n\\nI'm very satisfied now that you're comparing to a much stronger baseline. \\n\\n> Q2: Improvement of Figure 1.\\n\\nThe new figure is much easier for me to parse. \\n\\n> Q3: Jailbreak attempts on Gemini and GPT-4o.\\n\\nI understand that since your method is whitebox, it's not possible to do anything other than attempt to transfer attack other models. Do you think that makes this a substantially weaker / less realistic jailbreaking method? How is this change reflected in your threat model for your attack method?\\n\\n> Q4: More details on human judge of stealthiness score.\\n\\nThis is helpful context that should be in the paper. I'm not sure how robust the method described is, but nor am I sure of what a better method would be for labelling how stealthy your audio transformations are. 
Were the authors of the paper some of the judges? How many judges did you have? What did you do to try and make this a repeatable process? What would you have done to make the judging of your audio more consistent/systematic?\"}", "{\"comment\": \"Thanks for your response. I still have the following concerns:\\n\\n1. In Table A, is PAIR used in the transfer setting, i.e. the text suffix it generates is converted into speech via TTS? If yes, then my concerns about the weak baseline remain. I suggested that in addition to the transfer setting you must show the results in the text-only setting as well to demonstrate the advantage of using the speech-based attack.\\n1. I am confused by your statement that you want to evaluate the \\\"shape similarity of the *waveforms*\\\" by computing the cosine similarity of the mel-spectrum. Please clarify. Also, are there past works that have used cosine similarity to compare mel-spectrograms? 
If so, please provide citations.\\n\\nThank you for providing clarification on gradient shattering. I would recommend adding a sentence to define gradient shattering in the paper in order to avoid any confusion.\"}", "{\"comment\": \"Thank you for providing this result. My concerns have been addressed and I have increased my score.\\n\\nGood work!\"}", "{\"title\": \"Response to Reviewer cSgt\", \"comment\": \"We appreciate the reviewer's thoughtful feedback on our paper. Below, we included additional comments to further improve our work.\\n\\n> Q1: More discussions of why we want to enforce stealthiness in jailbreaks.\\n\\nThank you for the suggestion! The objective of enforcing stealthiness during optimization is motivated by empirical observations. Without the stealthiness constraint, the optimized adversarial audio, while effective, often sounds screechy. This unnatural quality draws undue attention from human auditors and risks being flagged or filtered by noise-detection systems. For illustration, we include examples of adversarial audio without the stealthiness constraint in the supplementary material.\\nBy enforcing stealthiness, we aim to make the adversarial audio sound natural, minimizing suspicion and avoiding detection by noise filters. This motivation aligns with text-based jailbreaks, where recent works [1,2] enhance fluency and readability of adversarial prompts to bypass perplexity-based filters.\\nWe included the discussions into Section 3.1.\\n\\n[1] Guo, Xingang, et al. \\\"Cold-attack: Jailbreaking llms with stealthiness and controllability.\\\" ICML 2024.\\n\\n[2] Sadasivan, Vinu Sankar, et al. \\\"Fast Adversarial Attacks on Language Models In One GPU Minute.\\\" ICML 2024.\\n\\n> Q2: Why do we add the adversarial segment as a suffix instead of adding it to the original query? The length of the adversarial suffix is non-trivial. 
The audio sample provided also contains white noises in addition to natural sounds.\\n\\nThank you for the insightful question!\\nWe would like to clarify that, similar to the approach in jailbreak literature where adversarial suffixes are added [3] instead of modifying the original tokens, our reasoning is as follows: altering the original query could lead to semantic changes, which might result in false positive jailbreaks. For example, if the query \\u201cHow to use a gun for fun?\\u201d is modified to \\u201cHow to use a water gun for fun?\\u201d, the model is likely to respond concretely, but this would not qualify as a successful jailbreak. Thus, in the context of LALM jailbreaks, we aim to keep the original query fixed to preserve its semantics and avoid such false positives.\\n\\nIn response to your comments on audio suffix lengths, we have included ablation studies of adversarial suffixes with varying lengths in Table A. The results indicate that when the adversarial suffix length exceeds 50 audio frames, AdvWave becomes less sensitive to further changes in suffix length while consistently demonstrating high ASR (attack success rates). For context, one audio token typically corresponds to approximately 0.05 seconds, assuming standard sampling rates and window sizes. Consequently, the audio suffix lengths we tested are within reasonable ranges. Since the suffix is masked by natural sounds, it typically resembles background noise, enhancing its stealthiness.\\n\\nLastly, thank you for reviewing our audio samples and highlighting the presence of white noise alongside the natural sounds in the optimized audio. We believe this noise is within acceptable limits and does not significantly impact stealthiness, as evidenced by our quantified stealthiness score. Nonetheless, incorporating additional smoothness penalties into our optimization framework could potentially address this issue. 
We have clarified this point in Section 3.4.\", \"table_a\": \"Attack success rates ASR-W and ASR-L with SpeechGPT on Advbench-Audio dataset.\\n\\n| Length of adversarial audio suffix | 10 | 30 | 50 | 70 | 90 |\\n|----------|--------|--------|--------|--------|--------|\\n| ASR-W | 0.296 | 0.499 | 0.643 | 0.676 | 0.699 |\\n| ASR-L | 0.245 | 0.563 | 0.603 | 0.621 | 0.633 |\\n\\n\\n[3] Zou, Andy, et al. \\\"Universal and transferable adversarial attacks on aligned language models.\\\" arXiv preprint arXiv:2307.15043 (2023).\"}", "{\"title\": \"Follow-up discussion with Reviewer vSXc\", \"comment\": \"We thank the reviewer\\u2019s thoughtful comments again!\\n\\n> Practicality of AdvWave as a white-box jailbreak method.\\n\\nWe consider both white-box and black-box optimization to be practically significant. As the widespread deployment of large models continues, it becomes increasingly challenging to centralize these models on a single server. For example, deploying a model on a mobile device effectively creates a white-box scenario for users. Therefore, we assume that studying white-box jailbreaks against LALMs is a meaningful research direction to inform and enhance the future alignment of these models.\\n\\n> Human judge details.\\n\\nThe details of the human judgment process are provided in Appendix A.5 and referenced in Section 4.1. Currently, two human annotators (the paper's authors) evaluate all audio clips, and the final human judgment score is calculated as the average of their scores. To enhance the annotation process, additional annotators, such as those from platforms like Amazon Mechanical Turk, could be hired for more comprehensive labeling.\"}", "{\"summary\": \"This paper presents a gradient-based jailbreak attack against Large Audio Language Models (LALM). The proposed method optimizes an adversarial audio suffix that bypasses the safety alignment of the LALM and causes it to produce harmful outputs. 
To account for the discretization performed to convert continuous audio representations into discrete tokens, a \\\"dual-phase\\\" optimization method is proposed whereby, first, the discrete token sequence is optimized to produce the desired harmful output and then the audio suffix is optimized to yield the discrete audio token sequence. Additionally, an adaptive search procedure is proposed to determine the best target for the adversarial loss optimization, and a loss component is introduced to make the adversarial suffix resemble a given environmental sound. Results show that compared to baselines the proposed approach greatly improves attack success rates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The paper is well written and generally clear\\n1. The proposed approach is novel and fills an important gap in the current literature.\\n1. The proposed attack is successful on diverse models which indicates its generalizability\\n1. Using an audio classification loss to make the adversarial suffix resemble natural sounds is an interesting and novel approach\", \"weaknesses\": \"1. The -Trans baselines seem too weak because these attacks tend to introduce symbols, like special characters, punctuation and emojis, that are not vocalizable so it is expected that generating speech from them will produce weak results. I recommend presenting results for the text-only attack along with the -Trans attack. This way the actual advantage of exploiting the auditory modality will become apparent.\\n 1. A better baseline could be to adversarially attack an ASR model that uses the same audio encoder as the LALM such that the target transcription is the text-only attack string.\\n\\n1. More details about the human evaluation score ($S_{\\\\text{Human}}$) are needed, including the number of raters, inter-rater agreement, and whether all raters rated all the test audios.\\n1. 
The normalization used for the stealth scores seems to be weight the components unfairly. The NSR and cosine are normalized by their theoretic maximum, while the human score is unnormalized so if the actual NSR and cosine scores occupy a smaller range then their contribution to the score will be penalized. A better normalization scheme might be to normalize the mean to 0.5 and standard deviation to 0.125.\\n1. The presentation can be improved:\\n 1. Phase II is to the left of Phase I in Figure 1. I suggest reorganizing it to make it appear to the right.\\n 1. The phrase \\\"gradient shattering\\\" or \\\"shattered gradients\\\" is confusing here because in prior work it refers to the specific phenomenon that as neural networks become deeper their gradients resemble white noise [1]. The particular phenomenon of relevance in this study is generally referred to as \\\"gradient obfuscation\\\" or \\\"obfuscated gradients\\\".\\n 1. The phrase \\\"retention loss\\\" is confusing because it is not clear what is being retained. The target discrete token sequence can not be \\\"retained\\\" because the encoder currently does not output it and it is being optimized to do so. Perhaps, \\\"alignment loss\\\" or \\\"sequence loss\\\" might be better.\\n 1. It is not clear from equation 2 that only a suffix is being optimized. It appears that the entire audio is being optimized.\\n\\n\\n[1] Balduzzi, David, et al. \\\"The shattered gradients problem: If resnets are the answer, then what is the question?.\\\" International conference on machine learning. PMLR, 2017.\", \"questions\": \"1. Why is noise-to-signal ratio used instead of the more common signal-to-noise ratio? Is it computed in a similar manner as SNR? The normalization and subtraction yields a quantity that is proportional to SNR so perhaps its simpler to just use SNR.\\n1. How exactly is $S_{\\\\text{Mel-Sim}}$ computed? The mel spectrogram is a matrix so how exactly is the cosine similarity computed? \\n 1. 
Why is cosine similarity used instead of L2 distance that is commonly used to compare mel spectrograms? I am not sure if the cosine similarity has a reasonable interpretation for mel spectrograms.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The paper proposes a jailbreak attack against Large Audio Language Models that can enable users to extract harmful information from these models cause them to respond to other users in a harmful manner.\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0BBzwpLVpm
Learning Identifiable Concepts for Compositional Image Generation
[ "Shaoan Xie", "Yujia Zheng", "Ignavier Ng", "Kun Zhang" ]
Humans have the ability to decompose objects into parts and relationships and create new objects by properly combining existing concepts. However, enabling machines to achieve this in real-world tasks remains a challenge. In this paper, we investigate how to teach machines compositional image generation through learning identifiable concepts. To derive concepts from attribute labels, we formulate the minimal change principle and propose a method to limit the information introduced by each label. Additionally, to address dependent attribute labels (with causal influences in between or common causes behind them), we present a causal conditioning approach to disentangle concepts from these correlations. Our framework enhances data efficiency, interpretability, and control, while enabling sampling from unseen combinations. We validate our method on various compositional image generation and editing tasks, demonstrating its effectiveness through superior performance.
[ "concept; composition; image generation" ]
https://openreview.net/pdf?id=0BBzwpLVpm
https://openreview.net/forum?id=0BBzwpLVpm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yBb8ei9Yao", "xuzM9NYNkk", "vlFnO4LxQ3", "TIaUBzLbOG", "Lh9w0DbI6j" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730653627781, 1730373797215, 1731658248108, 1730546012107, 1729726760674 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8486/Reviewer_Apr6" ], [ "ICLR.cc/2025/Conference/Submission8486/Reviewer_X3TZ" ], [ "ICLR.cc/2025/Conference/Submission8486/Authors" ], [ "ICLR.cc/2025/Conference/Submission8486/Reviewer_A5LF" ], [ "ICLR.cc/2025/Conference/Submission8486/Reviewer_3gsM" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses an intriguing problem: compositional image generation. It introduces the minimal change principle and proposes a method to limit the information introduced by each label. A causal conditioning approach is employed to disentangle concepts from correlations. The effectiveness of this method is validated across several tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Compositional image generation is a critical and practical problem, and this paper proposes a method to address it.\", \"The paper presents an identifiable guarantee for learning the underlying concepts.\", \"The generated images are promising, demonstrating the potential of the proposed method.\"], \"weaknesses\": \"1. This method relies on pre-defined attributes, which limits the method's practical applicability.\\n\\n2. Additionally, the proposed methods are evaluated only on simple datasets, which may not adequately represent complex real-world scenarios.\", \"questions\": \"1. When the attributes of a dataset are not directly accessible, how can they be retrieved?\\n\\n2. The current method utilizes a GAN-based model as the foundation. Is it feasible to implement this approach using a diffusion model instead? \\n\\n3. Additionally, how many attributes can this method manage effectively? 
If we aim to train a general-purpose model that can handle more than a thousand attributes, what strategies should be employed to address this scenario?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a GAN-based framework for learning identifiable concepts. Given ground-truth attribute labels, random noise is transformed into latent representations aligned with these labels, and sparsified using learnable masks to enforce a minimal change principle. To mitigate existing correlations between certain attributes, the authors explicitly identify causal relationships among attributes and factorize the labels to remove dependencies. Empirical results demonstrate that the proposed method outperforms baselines in terms of data efficiency and controllability.\", \"the_main_contributions_of_the_paper_are_as_follows\": [\"Formulation of the minimal change principle to learn compositional concepts, along with an efficient approach to factorize causally related attributes.\", \"Theoretical proof that the proposed method can recover ground-truth concepts.\", \"Empirical evidence showcasing improved data efficiency and controllability.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea of transforming labels to identify and disentangle causal relationships among attributes is interesting, and the authors have effectively demonstrated its impact in the experimental results.\", \"The proposed method significantly outperforms the baselines, both qualitatively and quantitatively, validating its practical advantages in achieving high-quality, controllable image generation, even in low-data settings.\"], \"weaknesses\": [\"It is unclear how the proposed method learns compositional concepts more effectively or in a fundamentally different way compared to existing approaches. 
Since the baselines also leverage disentangled ground-truth attribute labels, wouldn\\u2019t they similarly be capable of learning a generative model for compositional generation? In a similar context, it\\u2019s not fully explained why the proposed method is more data-efficient than the baselines. A more detailed elaboration on these points would strengthen the paper.\", \"The paper introduces several components (e.g., sparsity loss, learnable masks, $\\\\mathbf{z}^{\\\\text{null}}_i$\\u200b), but the justification for each component and their connections seems weak. It is a bit confusing as a reader to understand why each part is necessary. Please refer to the questions below for specific points on this aspect.\"], \"questions\": [\"What is the role of $\\\\mathbf{z}_c^*$ in equation (1)? It seems like it should encode information not represented by annotated labels (e.g., nuanced details). However, isn\\u2019t this type of information typically handled by the random noise $\\\\epsilon$? Does including $\\\\mathbf{z}_c^*$ have a significant impact on performance?\", \"What is the role of $\\\\mathbf{z}^{\\\\text{null}}_i$ in equation (6)? What kind of information is it intended to encode?\", \"It is hard to fully understand why enforcing the sparsity loss in equation (7) induces the minimal change principle. While Lines 522\\u2013524 suggest that constraining the representation\\u2019s dimensionality limits redundant information, this rationale is not entirely convincing. The minimal change principle, as described by the authors, states that \\\"the influence brought by each ground-truth concept should be minimal,\\\" which implies that changes in representation space should translate to minimal changes in the output space (e.g., altering the \\u2018age\\u2019 should yield the same image but with a different age). 
However, the sparsity loss in Equation (7) seems to restrict the input representation space rather than the changes in the output space, making it unclear how this connects to the minimal change principle.\", \"It would be better to use distinct notations for $\\\\mathbf{z}_i$ in equation (3) and (6) as they are clearly denoting different variables.\", \"Does $\\\\mathbf{m}_i$ in L177 refer to $\\\\mathbf{A}_i$?\", \"In Figure 6, the authors claim that foundation models (e.g., GPT-4o) generate unrealistic images for unseen attribute combinations. However, all images generated by GPT-4o in Figure 6 appear unnatural, suggesting that the poor results might not be due to rare attribute combinations but other factors, such as improper prompts provided to the model. Could the authors clarify if proper prompt was used, and whether different prompts might correct GPT-4\\u2019s performance on unseen combinations?\", \"In Table 8, which evaluates generation performance on human faces, it would be more comprehensive to include metrics for other generative models (e.g., GPT-4o, Meta AI, Stable Diffusion 3, as in Figure 6) for comparison.\", \"Between the sparsity condition and causal conditioning, which component is the key factor that causes the proposed method to succeed where the baselines fail in Figure 5? Would simply applying causal conditioning to the baselines improve their performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank every reviewer for your thoughtful feedbacks and we will edit our manuscript according to your suggestions. 
Thanks!\"}", "{\"summary\": \"This paper presents the minimal change principle and causal conditioning to allow generative models to create compositional images with clear, identifiable concepts. The central idea is to control image attributes without inducing unintended changes. To accomplish this, the authors regularize the model to learn the minimum dimensions needed to edit an attribute and use causal discovery algorithms to disentangle dependent attributes. The authors empirically and theoretically demonstrate that this approach enables models to learn attributes that are both identifiable and composable.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The minimal change principle is intuitive and makes sense.\", \"The concept of causal conditioning is interesting and intuitive.\", \"The proposed method achieves superior FID scores on MNIST4 and Car9 datasets compared to StyleADA and AugGAN.\"], \"weaknesses\": [\"There is no quantitative comparison showing if the model controls attributes better than the baselines. For example, metrics like Editing\", \"FID from StyleRes could be used to demonstrate controllability.\", \"Baselines about image editing and compositional image generation are missing.\", \"CausalGAN (Kocaoglu, et al. \\\"Causalgan: Learning causal implicit generative models with adversarial training.\\\" 2017.)\", \"AugGAN(on FFHQ) (Hou, et al. \\\"Augmentation-aware self-supervision for data-efficient GAN training.\\\" 2024.)\", \"StyleRes (Pehlivan, et al. \\\"Styleres: Transforming the residuals for real image editing with stylegan.\\\" 2023.)\", \"HyperStyle (Alaluf, et al. \\\"Hyperstyle: Stylegan inversion with hypernetworks for real image editing.\\\" 2022.)\", \"StyleTransformer. (Hu, et al. 
\\\"Style transformer for image inversion and editing.\\\" 2022.)\", \"Except for Figure 8, there is no metric provided for editability or composability, making it difficult to assess whether the proposed method learns more identifiable concepts than the baselines. Additionally, in the ablation studies, it is challenging to gauge the effectiveness of the proposed components without metrics for editability or composability.\"], \"questions\": [\"Regarding Section 3.4, it\\u2019s unclear why inversion cannot be done in the $z$ space or $w$ space. Would it be possible to move the input of $f\\\\_i$ to $z$ or $w$ space and perform inversion in $z$ instead?\", \"It is unclear why the first row of Table 2 is labeled as \\\"Ours.\\\" It appears to correspond to StyleGAN2-ADA.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors study compositional attribute generation and editing in image synthesis models. They argue relatively convincingly that the current large-scale image generation models fail to generate uncommon attribute labels (e.g. \\u201cfemale\\u201d + \\u201cfacial hair\\u201d), and propose a methodology to address this through the use of masks learned with a causal structure. The results show the method produces images that do not exhibit mode collapse like the baselines. In the case of editing real images, there is significant improvements to the editing of rare attribute combinations over recent work.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The studied problem of generating unseen attribute combinations is a pertinent one, with important implications for under-represented demographics and subpopulations. 
The authors did a convincing job with Figure 1 and in the introduction of motivating the problem with current large-scale image synthesis models, and making the benefits of the proposed solution salient.\", \"I appreciate that the experiments are relatively thorough in exploring multiple forms of image synthesis. Not only do the authors consider unconditional synthesis, but they also show how one can edit real images, greatly improving the contribution of their method.\"], \"weaknesses\": \"## [W1] Trade-off in performance for common attributes\\n\\nIn Figure 7, whilst the method clearly excels at generating rare attribute combinations (e.g. female + goatee), it fails in other cases to make more common edits (e.g. +blonde hair, or +bald).\\n\\nTo me, this seems very problematic. Almost by definition, most users will be interested in generating common attribute combinations. The fact that the method works so well on unseen combinations is a testament to its potential value, to be clear, but trading-off functionality for common edits at the same time seems like a clear and fundamental limitation. What use case does the proposed method serve if it\\u2019s at the cost of the common attribute combinations? In my view, this is the primary issue with the paper.\\n\\nAt minimum, I would expect to see a detailed discussion of this trade-off, and a solid justification for why it is worth making. Do the authors have any insights into why this might be happening? 
Furthermore, an insightful study would be one that quantifies the \\\"accuracy\\\" of edits for common vs uncommon attributes -- one could train a CelebA binary classifier to classify if an edited image actually depicts the new attribute or not, and one could see a breakdown of the performance for common vs rare attributes.\\n\\n## [W2] Lack of convincing baselines for independent attribute datasets\\n\\nI am not convinced that the authors do a good job of showcasing the benefits of their method in the independent attribute setting (Table 1 and Figure 5). Concretely, it is worrying that the baseline methods mostly fail to generate anything coherent at all (~20x as large FID scores). This really does not tell us much other than the baselines failed to train well (which could be for any number of reasons).\\n\\nThe authors could do a better job training the baseline models for a fairer comparison (e.g. perhaps with significant data augmentation, or through differentiable techniques such as [1]). Ultimately, we are not interested in the image quality itself, but instead in how well they perform in the \\u201cOut-FID\\u201d row on the rare attribute combinations. Through better training of the base models, we can isolate the impact of the proposed method on this row of interest without the confounding variable of the raw image synthesis quality in the way.\\n\\n## minor\\n\\nThe paper is full of typos, and some poorly written sentences. Just to mention a handful of examples from the introduction alone on the second page:\\n\\n- [L64] leads to \\u2192 lead to\\n- [L66] Ssadow \\u2192 Shadow\\n- [L72] mkae\\u2192 make\\n\\nUltimately these typos are indicative of a lack of care for presentation, and at times this renders the sentences hard to parse which I found often detracting from the content of the paper. I suggest some careful proof-reading is needed before the camera-ready or resubmission.\\n\\n---\\n\\n[1] Zhao et al. 
\\u201cDifferentiable Augmentation for Data-Efficient GAN Training.\\u201d NeurIPS 2020.\", \"questions\": \"It seems a relatively big limitation that the method relies on such rigid one-hot labels when the modern paradigm of image editing involves free-form textual descriptions. Do the authors envision easy ways to extend this to continuous or multi-label attributes, or free-form text? A discussion of the proposed binary attribute paradigm relates to the common free-form text editing one -- and their relative strengths -- would be insightful here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0ApkwFlCxq
ComputAgeBench: Epigenetic Aging Clocks Benchmark
[ "Dmitrii Kriukov", "Evgeniy Efimov", "Kuzmina Ekaterina", "Ekaterina Khrameeva", "Dmitry V. Dylov" ]
The success of clinical trials of longevity drugs relies heavily on identifying integrative health and aging biomarkers, such as biological age. Epigenetic aging clocks predict the biological age of an individual using their DNA methylation profiles, commonly retrieved from blood samples. However, there is no standardized methodology to validate and compare epigenetic clock models as yet. We propose ComputAgeBench, a unifying framework that comprises such a methodology and a dataset for comprehensive benchmarking of different clinically relevant aging clocks. Our methodology exploits the core idea that reliable aging clocks must be able to distinguish between healthy individuals and those with aging-accelerating conditions. Specifically, we collected and harmonized 66 public datasets of blood DNA methylation, covering 19 such conditions across different ages and tested 13 published clock models. We believe our work will bring the fields of aging biology and machine learning closer together for the research on reliable biomarkers of health and aging.
[ "biological age", "epigenetic aging clocks", "DNA methylation", "aging biomarkers", "longevity" ]
Reject
https://openreview.net/pdf?id=0ApkwFlCxq
https://openreview.net/forum?id=0ApkwFlCxq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wFJjL6SJn6", "tn6t0zYMH7", "si4L0GVl1j", "oo4Dqjghdw", "mUb4BHTUn4", "mTVkM6gR2I", "lG0uEi7LMK", "l3nAcdbUiE", "k8pN0imXix", "gchCmrZIF6", "aKHijCvtIr", "WB2h3tHWZm", "TZfglRmrw9", "RIu2ORsJ77", "PXmdnu6qgo", "FDyeJj1oUd", "DQbvsNzSwh", "AqvXU714yc", "AHhJTH28jf", "8qa6od9qKV", "8hJfvp8YTe" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732780386540, 1732782168436, 1732126356473, 1731347874996, 1732641811581, 1730577881483, 1732119627349, 1732121714947, 1732120901993, 1732738461244, 1732641772865, 1730598530948, 1737523873050, 1732745354866, 1732126321032, 1732755718176, 1732727975102, 1731305334113, 1732627289620, 1732312816294, 1734883268796 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_PX5Y" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_jKRR" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_3vvy" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_3vvy" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7894/Reviewer_XgQd" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_XgQd" ], [ "ICLR.cc/2025/Conference/Submission7894/Authors" ], [ "ICLR.cc/2025/Conference/Submission7894/Reviewer_jKRR" ], [ "ICLR.cc/2025/Conference/Submission7894/Area_Chair_kHcT" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank you for your thoughtful feedback, which enhanced the quality of our manuscript, and for increasing your score.\"}", "{\"title\": \"Post-discussion\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion period is about to end, we kindly want to thank all reviewers for their time and useful comments. Also, please let us know if there are any other questions or comments, which we are ready to address promptly.\", \"our_submission_in_a_nutshell\": [\"Clear **criteria and methodology** of assessment tasks for comparing clock models\", \"Systematic curation of **66 public datasets**\", \"Comparison of 13 published biological aging clock models\", \"19 health conditions across different ages and populations\", \"Sincerely, Authors of submission 7894\"]}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Q1: ...developing new methods on epigenetic aging clocks...**\\n\\nThe dataset, gathered for benchmarking, is also suitable for differential methylation analysis, clustering, and developing epigenetic clocks. It enables exploration of novel clocks **targeting CpG sites distinguishing aging-accelerating conditions (AACs) from healthy controls (HCs)**. Spanning **19 AACs, both genders, and a 90-year age range**, it offers valuable opportunities for clock research. While the primary aim was to validate clocks for estimating accelerated aging, not training them, the HC cohort can still support clock training, complementing ongoing open-access efforts [R1].\", \"reference\": \"R1. Ying, K., ... & Gladyshev, V. N. (2023). A Unified Framework for Systematic Curation and Evaluation of Aging Biomarkers. 
bioRxiv, 2023-12.\n\n**Q2: How does the dataset account for demographic and biological diversity?...**\n\nThank you for your comment. While collecting the dataset, we strived to find balance between data quantity and diversity. Because the published sets of DNA methylation (DNAm) data come from different studies with varying goals, they often lack a thorough annotation of patient health and other conditions. By aggregating as many data sources as possible, without excluding any genders or ethnicities, we presented the most representative collection of human DNAm profiles in health and age-related disease. The **details on age and gender distribution** within AACs and HCs per dataset are provided in **Appendix figures A1, A2** (Ref. section A.8). Clearly, there is room for improvement in terms of data sources, which we outlined explicitly in section **A.1**. Additionally, we have calculated the following Table R1 with sample counts, showing that the majority of data is unlabeled by ethnicity (and sometimes gender), even though we did our best to find quality data.\n\nTable R1. Sample counts by ethnicity and gender (M=Male, F=Female, U=Unknown) in the aggregated dataset.\n| Ethnicity | Gender | AAC | HC |\n|------------|--------|------|-----|\n| Unknown | M | 1620 | 603 |\n| | F | 1476 | 849 |\n| | U | 957 | 417 |\n| White | M | 1884 | 364 |\n| | F | 577 | 398 |\n| Black | M | 699 | 69 |\n| | F | 36 | 17 |\n| | U | 3 | 1 |\n| Hispanic | M | 97 | 55 |\n| | F | 50 | 48 |\n| Asian | M | 3 | 4 |\n| | F | 10 | 9 |\n| Other | M | 159 | 14 |\n| | F | 3 | 0 |\n\n**Q3: ...to balance the dataset across ... AACs and healthy controls, or to mitigate known biases in the sample selection process?**\n\nTo ensure that the datasets were not biased towards specific AACs, we first defined a set of criteria for AACs and datasets, targeting **major human organ systems** and then acquired 66 datasets, covering 19 of the 32 identified conditions (Ref. 
Table A2 for details). We aimed to assemble a comprehensive dataset to validate aging clocks, while **mitigating any bias in sample selection through the diversity and large number of studies** included. To prevent scenarios where a given aging clock might favor better predictions for a particular class of conditions (e.g., ISD), we present a **decomposition of the clock\\u2019s AA2 and AA1 scores**, as shown in Figure 3E,F. Additionally, we address the balance across categories by presenting sample distributions across conditions, ages, and demographic groups in Appendix Figures A1 and A2. These decompositions effectively **prevent misinterpretations and reduce potential clock bias caused by dataset imbalances across the AACs**.\\n\\n**Q4: ...datasets already exist publicly, reduces the novelty of the benchmark. However, ... putting together 66 datasets ... is a contribution...**\\n\\nIndeed, gathering and curating a new dataset of this size is a complex and a time-intensive endeavor (took us three years), because there are no options for an immediate selection of data relevant to the clock research. Collecting a large number of new DNAm samples from humans is increasingly difficult due to the high costs, significant time investment, and ethical challenges. This is also the reason why many clock-related papers **use chaotically varying sets of data**: everyone uses what they could find, with different teams managing to pre-process and split the data differently. Unfortunately, the rationale behind including or excluding data on a particular health/disease condition is rarely articulated as well. 
\\n\\nHence, by introducing **a clearly stated methodology for conditions selection** and by consolidating open-access datasets, we aspire to remove these barriers and to provide the research community with a valuable, easily accessible resource to accelerate the validation of aging clocks in a standardized fashion.\"}", "{\"summary\": \"The author introduces a benchmark designed to evaluate models of the epigenetic aging clock. The benchmark includes 66 datasets containing DNA methylation data that meet specific conditions and corresponding metadata, with a total sample size of 10,410. Four tasks are proposed to assess the models\\u2019 ability to distinguish between healthy individuals(HC) and age-accelerating conditions(ACC). Results of these four tests are summarized into Cumulative Benchmarking Score. The benchmark framework also includes 13 previously published models results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The author critiques previous benchmarks for being either small in scale, limited to predicting chronological age, lacking standardized datasets, comparing only a limited number of models, or relying on mortality and disease data that have restricted access.\\n\\nThe proposed benchmark seems address all of these limitations. Derived from publicly accessible data, it includes processing of data from both age accelerating condition (ACC) and healthy control (HC) groups to test model\\u2019s ability to distinguish between these conditions. Diseases with ACC are well considered. The benchmark includes 4 well-defined tasks with a summary score and evaluates 13 previously published models.\", \"weaknesses\": \"The paper is well-written and comprehensive overall, but several technical points need further clarification:\\n\\n1. The selection of metrics for benchmark tasks requires more justification. Specifically, why do tasks 2, 3, and 4 report median instead of the mean? 
Additionally, task 4 mentions the \"presence of covariate shift,\" but this shift is not clearly explained. Could the authors specify the covariate shift further?\\n\\n2. The rationale behind the summary benchmark score requires further explanation. Why was this scoring method chosen, and what are its advantages? Also, what does \"positive bias\" refer to in this context? In the Results section, it is stated that $S_{AA1}$ is adjusted by a ratio to penalize prediction bias, yet this concept of prediction bias remains unexplained. Further clarification on what prediction bias entails here would be beneficial.\\n\\n3. It appears that plots C and D in Figure 3 may be incorrectly presented. Plot D should likely represent $Med(|\\\\Delta|)$ rather than $Med(\\\\Delta)$, as all points are above the diagonal. Please clarify if this is a mislabeling or if I have misunderstood the data shown.\", \"questions\": \"Please see my questions in the above weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes ComputAgeBench, a unified framework and benchmark dataset for evaluating epigenetic aging clocks, which are predictive models for estimating biological age based on DNA methylation data. The framework aggregates 66 public datasets covering 19 aging-accelerating conditions, along with 13 published epigenetic clock models, to assess model performance consistently across a standardized dataset. 
The methodology incorporates rigorous evaluation criteria to test each model\\u2019s ability to distinguish between healthy individuals and those with accelerated aging conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"### Strengths\\n\\n The paper is clear and well-written, providing a solid foundation for its contributions. It presents a unified framework for evaluating epigenetic aging clocks, covering both first- and second-generation clocks. By introducing a benchmark dataset, the authors enable comprehensive testing of multiple epigenetic clock methods. \\n\\nThis work has potential to significantly impact the field of biological aging, as it offers a standardized dataset that can facilitate consistent evaluation across various epigenetic clock methods. Such a resource will likely streamline method comparison and improve reliability in aging research.\", \"weaknesses\": \"In reviewing the proposed benchmark in this paper, several key areas for improvement have emerged, particularly concerning data diversity, balance, and bias.\\n\\n \\n\\n### Weaknesses \\n\\n \\n\\n1. **Limited Report on Data Diversity**: The paper lacks adequate details on demographic and biological diversity, such as age, ethnicity, and health variations. Including these would improve the dataset's representativeness for broader applications. \\n\\n \\n\\n2. **Data Balance and Bias**: The authors do not address balance across categories (e.g., AACs vs. healthy controls) or potential sampling biases. This oversight may skew benchmarking results and limit generalizability. \\n\\n \\n\\n3. **Absence of Bias Mitigation**: No strategies are mentioned to detect or reduce dataset biases, which is crucial for fair benchmarking in aging prediction models, where demographic factors can affect DNA methylation patterns and model performance. Additional evaluation metrics for fairness would increase the strength of this benchmark. \\n\\n \\n\\n4. 
**Put Together Publicly Available Dataset**: The proposed dataset, to my understanding, is a collection of existing publicly available datasets. The authors do not present to the research community a new benchmarking dataset; rather, they collect existing datasets that they put together with a published harmonization technique. \\n\\nThe fact that the datasets already exist publicly reduces the novelty of the benchmark. However, I cannot ignore that putting together 66 datasets into a single dataset is a contribution that would facilitate the comparison of epigenetic clock methods.\", \"questions\": \"### Questions for the Authors\\n\\nIn evaluating the dataset and methodology presented, several questions arose that could help clarify the dataset\\u2019s potential applications and limitations.\\n\\n1. **Applicability for Method Development**: Can this dataset be effectively used for developing new methods on epigenetic aging clocks, or is it primarily intended for benchmarking and evaluation? Are there features or structures in the dataset that support novel method exploration?\\n\\n2. **Data Diversity and Representativeness**: How does the dataset account for demographic and biological diversity? Could the authors provide more details on the inclusion criteria to ensure the dataset is representative of a broad population?\\n\\n3. **Addressing Balance and Bias**: Were any steps taken to balance the dataset across aging-accelerating conditions (AACs) and healthy controls, or to mitigate known biases in the sample selection process?\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely thank you for your thoughtful and encouraging reviews. 
Your recognition of our work as *\\u201clikely to streamline method comparison and improve reliability in aging research\\u201d* (JKRR), *\\u201cwell-defined and relevant for the domain\\u201d* (3vvy), *\\u201ca pleasure to read\\u201d* with metrics that *\\u201cfoster progress in aging clock research\\u201d* (XgQd), and addressing key limitations in the field (PX5Y) inspires us greatly.\\n\\nSeveral reviewers have expressed concerns regarding the relevance of our work to the ICLR community. In response, we emphasize that **biological age represents an interpretable latent variable** derived from primary biomarkers, with wide-ranging applications in computational and practical domains, including predicting mortality, assessing morbidity risk, and evaluating the efficacy of potential longevity interventions. Additionally, biological age **inherently lacks ground truth values**, which bears similarities to out-of-distribution (OOD) detection, unsupervised and self-supervised learning, and other actively discussed topics at ICLR that rely heavily on data representations. We view our work as an invitation to the broader data science community to explore this challenging and impactful application of representation learning, **bridging the gap between clinical practice and machine learning**.\\n\\nWe deeply value your constructive suggestions and will work to incorporate them to further strengthen the manuscript. Please kindly refer to the individual responses below for our point-by-point replies.\\nThank you again for your time and insights. We are available for follow-up discussion and further clarifications.\\n\\nBest regards,\\n\\nAuthors of Submission 7894\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Q1: The selection of metrics ... 
\\\"presence of covariate shift,\\\" but this shift is not clearly explained.**\\n\\nFirst, we want to thank you for pointing out a typo: in the description of the AA1 task (Section 3.5), we indeed meant \\u201c...mean aging acceleration\\u2026\\u201d instead of \\u201cmedian,\\u201d as the applied Student\\u2019s t-test is used to determine whether the difference in means between two groups is significant. We will correct this typo in the revised version of the text.\\n\\nSecond, for tasks 3 and 4, the choice of the median is more appropriate due to its **robustness to outliers** compared to the mean. Thus, simplicity and robustness are the two primary reasons for favoring the median.\\n\\nThird, covariate shift, also referred to as a batch effect in bioinformatics, denotes the **shift between the distributions of covariates in two datasets**. For instance, the distribution of methylation values for a given CpG site could be centered around 0.45 in one dataset and around 0.55 in another\\u2014a common scenario in DNA methylation and other types of omics data. To evaluate the robustness of a given aging clock model to covariate shift (batch effect) between the original clock training dataset and datasets from the proposed benchmark, we introduced a **prediction bias task**. In this task, we calculate the median age acceleration, which reflects the **systematic shift in clock predictions caused by differences between datasets**. We will add explicit clarifications for the covariate shift in the revised version of the manuscript.\\n\\n**Q2: The rationale behind the summary benchmark score..**\\n\\nWhile designing our metric, we aimed for **simplicity and interpretability**. At the same time, we sought to include more data in the benchmark to address the data scarcity caused by the underrepresentation of certain AACs. In the simplest case, we could sum up the AA2 and AA1 scores; however, this approach would be unfair. 
**Clocks exhibiting a large systematic bias in their predictions might automatically perform better in the benchmark** due to their advantage in the AA1 task. Since we evaluate only aging-accelerating conditions, a positive systematic bias (where \\\"positive\\\" means that the predicted age acceleration tends to be statistically higher for healthy controls, whereas we expect it to be zero) should not be too large. **Such bias gives an unfair advantage to the model**. To account for this, we introduced a bracketed term in the BenchScore, which penalizes clocks with excessive systematic bias in their AA1 scores.\\n\\nAdditionally, we provided a **full decomposition of our metric** in the form of Figures 3E,F and in Table 1. This allows for a detailed examination of each clock's performance. As the authors of the first aggregating metrics, we hope this work sparks active discussion and contributes to the development of more advanced metrics in the future.\\n\\n**Q3: It appears that plots C and D in Figure 3 may be incorrectly presented. Plot D should likely represent Med(|\\u2206|) rather than Med(\\u2206), as all points are above the diagonal. Please clarify if this is a mislabeling or if I have misunderstood the data shown.**\\n\\nNo mislabeling here, but thank you for pointing to a potential issue with the readability of the figure. We will rewrite the caption in Fig 3C and D to be less confusing. The meaning of Chronological age prediction accuracy (Fig. 3C) is to measure the absolute error for each data sample (i.e. Med(|\\u2206|)). In Chronological age prediction bias (Fig. 3D), we **measure the shift of the overall prediction** (i.e., Med(\\u2206), as written in the paper) and it can be of negative or positive value. To better demonstrate the concept of \\u201cprediction bias\\u201d we sketched a limiting case in the Figure 3D **when all samples were predicted with a positive age acceleration**. 
This yields a strictly positive value of Med(\\u2206), which is graphically represented as a red arrow on the figure.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**\\\"Q1: ..better suited for a forum more specific to biological age..\\\"**\\n\\nAlthough we hear the concern, we respectfully argue that biological age, as presented in our study, aligns closely with the goals of representation learning, because the community looks for an **interpretable univariate representation of complex biomarker data**. Biological age encapsulates a high-dimensional array of biomarkers into a **single, accessible latent variable**. This process is achieved by training models under specific assumptions that have evolved across the generations of aging clocks, similar to how representation learning frameworks have refined their assumptions to improve the quality and relevance of latent representations in other biologically relevant tasks presented at ICLR previously [R1, R2, R3].\\n\\nFurthermore, the utility of biological age extends into numerous downstream applications, such as predicting all-cause mortality, estimating multi-morbidity risk, and evaluating general health status. These applications directly highlight its role as an **interpretable and practically valuable representation** with real-world impact [R4, R5]. In this way, biological age enriches traditional representation learning by adding interpretability and ease of use, bridging machine learning and healthcare.\\nRegarding the tasks proposed in our benchmark, we emphasize that the task selection is inspired by integrating the established practices of aging clock evaluation found in numerous studies. 
We employ a clear, interpretable approach for testing aging clocks, which is relevant to a broader research community in data and life sciences.\\n\\n**Q2: ..normalization..**\\n\\nDuring the data pre-processing step of our study, we were primarily oriented at **the most common use case of applying aging clocks** by anyone other than the clockmaker. That is, when a researcher collects and processes their data in a way they find the most appropriate (for data, not for the aging clocks they plan to use), and then apply an aging clock model **trained on other data, already pre-processed** by the clock authors. As there is no gold standard for DNA methylation pre-processing, **each research group carries out their own pre-processing that does not necessarily match the pre-processing pipeline used for training the clock model**. For example, this is almost always the case when older clock models, such as Horvath2013 or Hannum2013, are applied to recently acquired data.\\n\\nTherefore, so as to retain this typical workflow and not to put any clock model into advantage by choosing the same pre-processing that matches its own pipeline for every dataset, we decided to include already pre-processed datasets. In doing so, we also relied on two existing papers. First, compiling already pre-processed datasets without performing the same pre-processing for all of them was done in [R6], another notable effort in the aging clock community. Second, we were also encouraged by a recent paper by Varshavsky et al. 
[R7] who managed to **create an accurate clock model** by combining several blood datasets\\u2014without any additional normalization or correction procedure, **using already pre-processed data** from previous studies (some of which are included in our dataset as well), and thus demonstrating that the **between-dataset normalization is not critical for this type of data**.\\nWe discussed our reasoning in the Section 3.3 of our Benchmarking Methodology chapter and in the Appendix section A.9, and we thank you for pointing out the potential misunderstanding that we will clarify in the revised text.\\n\\n**Q3: A minor issue...**\\n\\nAdmittedly, we had considered removing this list in the Appendix, but eventually kept it in the Methodology section so that all the respective studies of DNA methylation profiling would be cited in the main references list. However, this issue is indeed minor, and we are open to replacing the references in the revised version.\\n\\n**References**\\n\\nR1 Marin, F. I., et al. Bend: Benchmarking dna language models on biologically meaningful tasks. // ICLR 2024\\n\\nR2 Pandeva T., Forr\\u00e9 P. Multi-View Independent Component Analysis for Omics Data Integration //2023 ICLR \\n\\nR3 Zhou Z. et al. Dnabert-2: Efficient foundation model and benchmark for multi-species genome //arXiv preprint arXiv:2306.15006. \\u2013 2023.\\n\\nR4 Pyrkov T. V. et al. Longitudinal analysis of blood markers reveals progressive loss of resilience and predicts human lifespan limit //Nature communications. \\u2013 2021. \\n\\nR5 Pierson E. et al. Inferring multidimensional rates of aging from cross-sectional data //The 22nd International Conference on Artificial Intelligence and Statistics. \\u2013 PMLR, 2019. \\n\\nR6 Ying, K., et al. (2023). A Unified Framework for Systematic Curation and Evaluation of Aging Biomarkers. bioRxiv, 2023-12.\\n\\nR7 Varshavsky, M., et al. (2023). 
Accurate age prediction from blood using a small set of DNA methylation sites and a cohort-based machine learning algorithm. Cell Reports Methods.\"}", "{\"comment\": \"We sincerely thank you for your additional reflection on our work and we hear your concern!\\n\\nFirst, the major contribution that we strived to make was not the dataset itself. It was an **explicit methodology of selecting the specific conditions and datasets**, based on clear, clinically relevant assumptions about how clocks should behave if we want them to be truly indicative of a person's biological age. The main problem with epigenetic aging clocks now, in our opinion, is that **there is no consensus way that they can be validated with using open-source data**, as biological age is a latent variable. By presenting our benchmark, we are, for the first time, providing researchers with a means of reliably comparing clock models, not just by how well they predict chronological age, but by how well they can perform in conditions that are proven to substantially decrease life expectancy. Our approach may not become consensus in the end, but, currently, **it is the only one that exists for omics data**, and we hope it will generate fruitful discussion in the field. \\n\\nSecond, while most aging clocks indeed rely on linear models, their training methodologies differ significantly, with first-generation clocks predicting chronological age and second-generation clocks predicting mortality using Cox proportional hazard models, requiring differing assumptions about the biological age *as a learnt representation*. \\n\\nAfter three years of curating in-human data for our unifying benchmark, we see a critical need for ML expertise to develop robust biological age predictors, and we are well aware of the expectations at traditional ML conferences. It is to bridge this gap that we created our fully open-access benchmarking approach and repository. 
We hope that the optimal longevity markers will emerge through collaboration between the biological and ML communities. \\n\\nLastly, please kindly note a trend in ICLR and other top conferences to accept similar works: \\n\\n-- Marin, et al. \\u201cBend: Benchmarking dna language models on biologically meaningful tasks\\u201d, **ICLR 2024**. \\n\\n-- Sihag, et al. \\u201cExplainable brain age prediction using covariance neural networks\\u201d, **NeurIPS 2023**.\\n\\n-- Pandeva T., Forr\\u00e9 P. \\u201cMulti-View Independent Component Analysis for Omics Data Integration\\u201d, **ICLR 2023**.\\n\\n-- Zhou Z. et al. \\u201cDNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes\\u201d, **ICLR 2024**.\\n\\n-- Weinberger, E., & Lee, S. I. \\u201cA deep generative model of single-cell methylomic data\\u201d, **NeurIPS 2023** (GenBio Workshop).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nPlease kindly let us know if you are satisfied with our responses and if we can do anything else to support our work.\"}", "{\"summary\": \"The authors present a benchmark study where they contrast different computational methods, namely aging clocks, for inferring biological age from epigenetics (methylation) data. A corpus of datasets relevant for the benchmark was built through a systematic search, and it is provided as a resource. Finally, the evaluation was performed on four different tasks, devised in such a way to capture different aspects of aging clocks' performances.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The benchmark is well structured: (i) a variety of datasets and methods are included, and (ii) the tasks upon which the methods are evaluated are well defined and relevant for the domain. 
Furthermore, such types of benchmarks are quite timely, due to a continuously growing list of available aging clocks.\", \"weaknesses\": \"My main criticism is that the paper is only marginally relevant with respect to the topics of the conference. Inferring the biological age of an individual can hardly be considered as learning representations. The machine learning methods used for deriving aging clocks are very well known and established, thus lacking novelty. The tasks presented in the paper to assess the clocks' performances are not totally novel, as the authors themselves point out in section 2.2.\\n\\nFrom a technical point of view, an important aspect that the paper does not address is preprocessing. Several normalization methods exist for methylation data, and their impact on downstream analysis is well documented (see for example Teschendorff et al. 2013). A robust benchmark should try to evaluate the effect of different normalization methods on aging clock performances.\", \"a_minor_issue_the_authors_may_want_to_consider\": \"the long list of references on page 6 could be placed in the appendix, to ease reading\\n\\nAndrew E. 
Teschendorff, Francesco Marabita, Matthias Lechner, Thomas Bartlett, Jesper Tegner, David Gomez-Cabrero, Stephan Beck, A beta-mixture quantile normalization method for correcting probe design bias in Illumina Infinium 450 k DNA methylation data, Bioinformatics, Volume 29, Issue 2, January 2013, Pages 189\\u2013196,\", \"questions\": [\"I would like to ask the authors to address the two main criticisms I listed in the \\\"weaknesses\\\" section:\", \"Overall, the opinion of this reviewer is that while the work undoubtedly has merit, it would be better suited for a forum more specific to biological age and aging clocks.\", \"Regarding the normalization of methylation data, I would invite the authors to at least discuss whether the preprocessing of the included datasets matches the recommended preprocessing of each aging clock (if any).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I would like to thank the authors for their effort in replying to my comments. Unfortunately, I still have concerns regarding this submission.\\n\\nFirst, I still consider this work only marginally relevant for the ICLR conference. I do appreciate the clarification offered by the authors regarding the latent nature of biological age. However, the major contribution of the paper remains its curated selection of methylation datasets suitable for age clocks' training / validation, which again I think would be better suited for a more specialized journal or conference.\\n\\nRegarding data normalization, I disagree with the authors' statement \\\"to retain this typical workflow and not to put any clock model into advantage by choosing the same pre-processing that matches its own pipeline for every dataset\\\". 
The fact that the \\\"typical workflow\\\" seems to be \\\"preprocessing their own data while ignoring the preprocessing used during the creation of the clock\\\", does not mean that this workflow is correct. Rather, this shows that there is a need of evaluating the most suitable normalization approach for each age clock, so that researchers can make an informed choice when they pair a preprocessing algorithm and an age clock. I understand that such a onerous task might be outside of the scope of this paper, however I would urge the authors to elaborate more on this point, better underlying that the effect of preprocessing on biological age estimation is still an under-investigated topic.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**Q1: Will you make your benchmarking dataset publicly available?**\\n \\nYes, we will definitely make it fully available, and the links to both the dataset and the code required to reproduce all figures are already prepared, but are omitted in the anonymized version of this submission.\\n\\n**Q2: Can you please confirm that your evaluation tasks/metrics are original, and add citations if not?** & **W2: ..cumulative score requires more justification**\", \"we_write_in_the_article\": \"\\u201cTwo approaches we propose as essential tasks in our benchmark entail related prior art. For example, Porter et al. [41] and Mei et al. [34] used one-sample or two-sample aging acceleration tests for clock validation\\u201d, demonstrating that the AA2 (see [R1, R2]) and AA1 (see [R3]) tasks were well-established and actively utilized previously. Hence, constructing a score based on these approaches for clock validation was a natural choice. 
Furthermore, calculating the median absolute error (as in [R4, R5]) to measure the accuracy of chronological age prediction is also a well-known approach.\\n\\nThe reason a weighted sum of the four metrics is not applicable in our case is that only the AA2 and AA1 tasks are essential for testing clock validity, as outlined in the requirements provided in the Background section. The other two tasks serve as auxiliary measures, providing additional insights into the degree of clock calibration to chronological age.\\n\\nPlease kindly refer to our response to Reviewer PX5Y Q2 for additional details regarding the rationale behind the design of the BenchScore metric.\\n\\n**Q3: Can you make a case for why the paper is a strong fit for ICLR, despite not truly being in the representation learning space?**\\n\\nPlease kindly refer to our response to Reviewer 3vvy Q1, where we explain why the biological age is indeed a learnable representation. Think of the task of BA prediction that has no ground truth. It bears similarities with **unsupervised learning task** when the latent representation (BA) is obtained within a particular training procedure. For example, in the 1st generation clocks (or even unsupervised clocks), the BA cannot be verified with a ground truth \\u2013 and yet \\u2013 these clocks are still used in downstream clinical tasks. **Likewise, a significant portion of ICLR submissions is concerned with tasks that involve unknown targets, metrics, and data that lack annotation**. That is why we believe that the ML community is the best place where aging researchers could seek help from. \\n\\n**W1: ...weren't all re-trained on a standardized training dataset...**\\n\\nIndeed, the standardized training data are important, but often missing from clock studies, with the Biolearn effort [R1] being one of the few sources to provide it. 
Taking pre-trained models was exactly the point of doing our comparison, because **most published aging clock models employ the same architecture** (most often linear regression-based). **The outcomes are unique only thanks to their training data**. Therefore, re-training clock models on a single training dataset would create a completely novel clock that would have little in common with the published ones. Moreover, the **second-generation clocks rely on data combining mortality and DNAm values**, which, as noted in our manuscript, are either **unavailable or restricted** due to ethical concerns.\\n\\n**W3: Requires clarification: ...\\\"Clearly, the first task [AA1] provides a more rigorous way to test aging clocks [compared to AA2]\\\"...**\\n\\nThank you for highlighting a potential misunderstanding. We will clarify by mentioning the tasks explicitly: \\u201cClearly, the first task (AA2) provides a more rigorous way to test aging clocks (compared to AA1) \\u2026\\u201d. AA2 is more rigorous because it compares predictions between diseased and healthy patients, accounting for possible age acceleration due to batch effects, rather than comparing diseased patients to zero acceleration. This ensures that clocks systematically predicting accelerated ages for healthy subjects gain no advantage in AA2, but might appear successful in AA1. To address this, we penalize the AA1 score for prediction bias, which justifies the need for a cumulative benchmarking score.\", \"references\": \"R1 Mei X et al. Fail-tests of DNA methylation clocks, and development of a noise barometer for measuring epigenetic pressure of aging and disease. Aging (Albany NY), 2023.\\n\\nR2 Ying K et al. Causality-enriched epigenetic age uncouples damage and adaptation. Nature Aging, 2024.\\n\\nR3 Porter HL et al. (2021). Many chronological aging clocks can be found throughout the epigenome: Implications for quantifying biological aging. Aging cell, 20(11), e13492.\\n\\nR4 Horvath S (2013). 
DNA methylation age of human tissues and cell types. Genome biology, 14, 1-20.\\n\\nR5 Thompson MJ et al. A multi-tissue full lifespan epigenetic clock for mice. Aging (Albany NY), 10(10), 2832.\\n\\nR6 Ying K et al. (2023). A Unified Framework for Systematic Curation and Evaluation of Aging Biomarkers. bioRxiv, 2023-12.\"}", "{\"comment\": \"We thank you for elaborating your concerns!\\n\\nRespectfully, we argue that the foremost contribution of our work was not a collection of datasets. Given time and effort, the curation of such a stack is tedious, but trivial. Taken individually, the criteria for selecting conditions and datasets, and the tasks that we used are also not novel. But, more importantly, our work for the first time aggregates them all in a **coherent, rigorous, and clinically relevant methodology to compare epigenetic clock models** and, yes, we also provided the open-access datasets that are fit specifically for this methodology. \\n\\nCurrently, our proposal is **the only effort to define conditions that the clocks should be compared by** (in cases when epigenetic data combined with mortality is not publicly available, which is always, unfortunately). Even in the best cases, the authors of a clock paper choose their benchmarking datasets ad hoc (when a model is already built) and without openly clarifying why they limited themselves to that specific combination. This leads to a situation, in which we, as observers, **cannot verify whether their clocks can actually be trusted for estimating biological age**, because we cannot readily test them on any other severe condition which is expected to impact a person's biological age dramatically. 
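The accuracy-versus-bias distinction invoked in these rebuttals — Med(|Δ|) for chronological age prediction accuracy versus Med(Δ) for prediction bias — can be sketched in a few lines of Python. This is a minimal illustration with made-up age values, not the benchmark's actual implementation:

```python
from statistics import median

# Hypothetical chronological ages and clock predictions (illustrative values only).
chronological = [30, 40, 50, 60, 70]
predicted = [33, 36, 48, 65, 74]

# Delta = predicted age minus chronological age, per sample.
delta = [p - c for p, c in zip(predicted, chronological)]  # [3, -4, -2, 5, 4]

accuracy = median(abs(d) for d in delta)  # Med(|Delta|): robust prediction error
bias = median(delta)                      # Med(Delta): systematic shift, may be negative

print(accuracy, bias)  # 4 3
```

Using the median rather than the mean matches the robustness-to-outliers argument given in the rebuttal above: a single wildly mispredicted sample moves the mean but leaves the median largely unchanged.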
By our work, we let any researcher validate any model on the largest collection of epigenetic data in life-shortening conditions.\\n\\nConcerning data processing, we would surely love to see that the newer datasets are always well-paired with older clock models, however this task is of lesser concern, because the main issue with aging clock nowadays is not lack of appropriate normalization. As we mentioned in the earlier reply, Varshavsky et al. (2023) had managed to create accurate clocks by combining several datasets without any processing at all. **More crucial and far less established** is the methodology of validating a clock's predictive ability to notice significant changes in patient's health, thus being a good indicator of biological age. Without such methodology, no processing can help. \\n\\nSimilarly to what we commented in an other reply to reviewer XgQd, we do not expect that our approach will become an immutable canon carved in stone. Clinically relevant methodology for validating *latent biomarkers of aging* is necessary, but it should arise from **a scientific discussion between clinicians, aging biologists, and those proficient at learning representations**. Therefore, by presenting it to the ML community, we strive to pave the road for the extended network of researchers to attract their attention to the task of modeling aging with epigenetic data and engage in this productive discussion.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you for taking the time to reply to the Reviewer comments and adding clarity. After reflecting on the paper and the authors' response to why it's a fit for ICLR, I maintain my position that while it is important work, it does not seem like a strong fit for ICLR. The models being compared are regression models, with the main difference between them being the data they were trained on. 
The major contribution of this paper is its benchmarking dataset (as the metrics are not novel), and while datasets & benchmarks papers are invited as ICLR submissions, this paper is really comparing how different training datasets affect performance rather than different machine learning models.\"}", "{\"summary\": \"This paper benchmarks 13 different published biological clock models using a standardized test dataset that they compiled from more than 50 different publicly available studies. While no ground truth data is available for biological age (as it is a latent factor) or for age at death (as this data often isn\\u2019t published), the authors offer 4 compelling metrics by which to score the models' accuracy and robustness. This paper presents a resource to the community in terms of a newly published benchmarking dataset, well-motivated metrics, and ratings for the current state of the art clock models. The paper also appropriately outlines limitations, such as the fact that some datasets had poor performance across all models, raising questions about dataset shift and for what kinds of data the clocks can be expected to make sound predictions. I believe this paper will help generate scientific discussion and progress in the aging clocks research community.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"This paper is written very clearly, and did a great job walking the reader through the background to the problem, definitions of biological age, and different kinds of biological clock models. Its graphics are informative, clear, and aesthetic. Truly a pleasure to read!\", \"Provides colab notebook for reproducibility\", \"I believe this paper will be significant to those in the biological clocks community. 
It is a benchmarking paper, so while it doesn't offer a new methodology itself, it does offer original tasks/metrics for assessing the performance of these models (I think they are original, I asked for clarification in the questions section) and a standardized benchmarking dataset (I asked for clarity to confirm it will in fact be published along with this paper)\"], \"weaknesses\": [\"I was disappointed that the clock models weren't all re-trained on a standardized training dataset. Without standardizing the training data, it is impossible to know whether the methodology of the clock or the training data it used are contributing to better/worse performance. This insight would be critical to the community in improving clock methodologies going forward.\", \"The way that the authors chose to combine benchmarks in the cumulative score requires more justification. I am not sure why the different metrics should affect each other's weights so much. A simple sum, or weighted sum, of the four variables might be more appropriate if stronger justification is not supplied.\", \"Requires clarification: on the one hand, authors write \\\"Clearly, the first task [AA1] provides a more rigorous way to test aging clocks [compared to AA2]\\\" on the other hand, they write \\\"The most rigorous of the four, AA2 task demonstrates...\\\"\", \"Your description of the biomarker paradox could be improved. When I first read your description, I was left with questions. I had trouble finding more info on the \\\"paradox of biomarkers\\\" using the papers you cited (possibly due to paywall issues, I couldn't see the full articles), but you might consider adding this reference _Sluiskes, Marije H., et al. 
\\\"Clarifying the biological and statistical assumptions of cross-sectional biological age predictors: an elaborate illustration using synthetic and real data.\\\" BMC Medical Research Methodology 24.1 (2024): 58._ as their explanation made me fully understand the problem, namely that \\\"a (bio)marker that perfectly correlates with chronological age is useless in estimating biological age... in principle a nearly perfect chronological age predictor can be developed, as long as the sample size is large enough [35]. In such a case all signal related to biological aging would be lost.\\\"\", \"More broadly, while I really enjoyed the paper, I am not sure it is a great fit for the ICLR community, as this model is a predictive regression model and not in the space of representation learning.\"], \"questions\": [\"Will you make your benchmarking dataset publicly available? Can you please add a link to it in your manuscript? I view this benchmarking dataset as a significant portion of your contribution in this work.\", \"Can you please confirm that your evaluation tasks/metrics are original, and add citations if not?\", \"Can you make a case for why the paper is a strong fit for ICLR, despite not truly being in the representation learning space?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Submission update by Authors\", \"comment\": \"Dear Chairs and Reviewers,\\n\\nWe have updated the submission PDF to address the points raised in the Reviews. Below is the list of changes we have introduced:\\n\\n1. In Benchmarking Methodology section 3.5, we have corrected the typo in the AA1 task description, replacing \\u201c**median** aging acceleration\\u201d with \\u201c**mean** aging acceleration\\u201d, as per our response to reviewer PX5Y.\\n\\n2. 
In Benchmarking Methodology section 3.5, we have added the explicit mentions of the tasks: \\u201cClearly, the first task (**AA2**) provides a more rigorous way to test aging clocks **compared to AA1**, because it helps to control potential covariate shifts, but the second task (**AA1**) deserves its place in the list, as it allows including more data into the panel to overcome data scarcity\\u201d, as per our response to reviewer XgQd.\\n\\n3. In Benchmarking Methodology section 3.5, we replaced the last paragraph describing the 4th task (prediction bias task) with the following to include a deeper explanation of covariate shift, as per our response to reviewer PX5Y: \\u201cWe introduced the fourth task, a prediction bias task, to evaluate the robustness of a given aging clock model to covariate shift between the original clock training dataset and the datasets from the proposed benchmark. Covariate shift, also referred to as batch effect in bioinformatics, denotes the shift between covariate distributions in two datasets. For instance, the distribution of methylation values for a given CpG site could be centered around 0.45 in one dataset and around 0.55 in the other one\\u2014a common scenario in DNAm and other omics data. Because each clock is trained on healthy controls, we expect age deviation of HC samples to be zero on average (*i.e.*, $E(\\\\Delta_{HC})=0$). In practice, however, due to the presence of a covariate shift between the training and testing data, a clock might produce biased predictions, resulting in a systemic bias and adding or subtracting extra years for a healthy individual coming from an external dataset. The goal of the fourth task is to control for such systemic bias in clock predictions. 
Therefore, as a benchmarking metric for this task, we calculated median aging acceleration ($Med(\\\\Delta)$) across HC samples from the entire dataset panel, which reflects the systematic shift in clock predictions caused by differences between datasets.\\u201d\\n\\n4. In Benchmarking Methodology section 3.6, we added the following sentences before the last paragraph, addressing our discussion with reviewer PX5Y: \\u201cWhile designing our metric, we aimed for simplicity and interpretability. At the same time, we sought to include more data in the benchmark to address data scarcity caused by the underrepresentation of certain AACs.\\u201d\\n\\n5. In the caption of Figure 3, we added the following, as per our response to reviewer PX5Y: \\u201c(C) illustrates that chronological age prediction accuracy is measured by median absolute error ($Med(|\\\\Delta|)$) across all predictions. For a limiting case of prediction bias sketched in (D), all samples were predicted with positive age acceleration, leading to a strictly positive value of $Med(\\\\Delta)$, graphically represented as a red arrow pointing to a cross.\\u201d\\n\\n6. In Appendix section A.9, we clarified the rationale behind not performing any additional data processing and inter-dataset normalization, as per our response to reviewer 3vvy, writing the following: \\u201cAs there is no gold standard for DNAm pre-processing, each research group carries out their preferred pipeline that does not necessarily match the processing pipeline used for training the clock model, especially in the case of applying earlier clocks (e.g., those by Hannum et al. (2013) or Horvath (2013)). Therefore, so as to retain this typical workflow and not to put any clock model at an advantage by choosing the same processing that matches its own pipeline for every dataset, we did not perform any post-processing, inter-dataset normalization, or batch effect correction. In doing so, we also relied on two existing papers. 
First, compiling already pre-processed datasets without performing the same processing for all of them was done by Ying et al. (2023), another notable effort in the aging clock community. Second, we were also encouraged by a recent work by Varshavsky et al. (2023) who managed to create an accurate clock model by combining several blood datasets\\u2014without any additional normalization or correction procedure, using already pre-processed data from previous studies (some of which are included in our dataset as well), and thus demonstrating that the between-dataset normalization is not critical for this type of data.\\u201d\\n\\nBest regards,\\n\\nAuthors of Submission 7894\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for addressing all my questions. I do believe that this line of work will have an important impact on epigenetic clock research. Thus, I raise my score to 6 and I recommend this paper for acceptance. Best of luck!\"}", "{\"metareview\": [\"The paper presents a framework for evaluating 13 published pre-trained linear regression-based biological clock models. Because no ground truth exists for biological age (a univariate latent representation), the paper presents a scoring mechanism to evaluate them on four related tasks.\", \"**Strengths:**\", \"The paper is very well written and easy to follow.\", \"The compilation of the datasets is impressive and represents a thorough examination of how to compare epigenetic clock models.\", \"The authors have participated heavily in the discussion period with lengthy responses to reviewer concerns.\", \"**Weaknesses:**\", \"The consistent and fundamental concern is whether this work is a good fit for the ICLR community. Put another way, it is not obvious whether the researchers who work on learning representations would benefit from this work. The models compared are all essentially the same linear regression model that have been pre-trained on different datasets. 
Several reviewers pointed out this flaw and remain unconvinced with the author rebuttal.\", \"I want to acknowledge that the authors have made several arguments to defend the claim that the work is relevant to ICLR. They have stated that: a) biological age is inherently a learned representation lacking ground truth, b) they have cited several similar works published at comparable ML venues, and c) the key contribution of the work is not the compilation of datasets but instead \\\"validating a clock's predictive ability to notice significant changes in patient's health, thus being a good indicator of biological age\\\" (from author response to Reviewer 3vvy).\", \"After close examination of the paper, reviewer comments, and author responses, I remain unconvinced by a) and b). For a), if learning biological age is a relevant representation-learning task, it is unclear why other methods from the ICLR community are not used. The authors list out-of-distribution (OOD) detection, unsupervised, and self-supervised learning as relevant methods, but none of these are used in the linear-regression methods that are benchmarked. For b), the listed similar works (in response to Reviewer XgQd) are indeed ML-venue benchmark and methodological papers on aging models using -omics data. However, the listed works focus much more on novel architectures or benchmarking more sophisticated models such as language models or generative models.\", \"In response to c), the judgement about whether the work is clinically relevant is not the purview of ICLR. 
If this is indeed one of the main contributions of the work --- especially important in light of my thoughts on a) and b) --- then the work is better suited for a more relevant clinical venue.\", \"The fit with ICLR's focus on methods, datasets, and benchmarks for learning representations weighed the most in my decision.\"], \"additional_comments_on_reviewer_discussion\": \"The paper had a very active discussion, and I commend both reviewers and authors for being engaged, respectful, and constructive in their responses.\", \"summary_of_the_main_points_raised\": \"1) **Reviewers pointed out small typos and clarifications (e.g., mean vs median)**: Authors acknowledged and addressed these comments\\n 2) **Reviewers noted that the main contribution of the paper is the compilation of datasets**: Authors maintain that the key contribution is the development of the evaluation scheme through the identification of relevant conditions and combination of the tasks. Note that the datasets compiled are all public. Two of the four benchmark tasks are previously developed (with citations). One is a \\\"natural choice\\\" given the first two tasks. The last one is also \\\"well-known\\\" (no citation).\\n3) **Reviewers asked about details about the evaluation of the clock models**: Authors confirmed that the clock models largely use the same architecture (i.e., a linear regression) and that the only difference across models is the training data used. \\n\\nSee above for my thought process in making my final decision.\"}" ] }
0Ag8FQ5Rr3
The Super Weight in Large Language Models
[ "Mengxia Yu", "De Wang", "Colorado Reed", "Alvin Wan" ]
Recent works have shown a surprising result: a small fraction of Large Language Model (LLM) parameter outliers are disproportionately important to the quality of the model. LLMs contain billions of parameters, so these small fractions, such as 0.01%, translate to hundreds of thousands of parameters. In this work, we present an even more surprising finding: pruning as few as a single parameter can destroy an LLM’s ability to generate text—resulting in an increase in perplexity by three orders of magnitude and reducing zero-shot accuracy to guessing. We propose a data-free method for identifying such parameters, termed super weights, using a single forward pass through the model. Additionally, we find that these super weights induce correspondingly rare and large activation outliers, termed super activations. When preserved with high precision, super activations can enhance simple round-to-nearest quantization, making it competitive with state-of-the-art methods. For weight quantization, we similarly find that by preserving the super weight and clipping other weight outliers, round-to-nearest quantization can scale to much larger block sizes than previously considered. To facilitate further research into super weights, we provide an index of super weight coordinates for common, openly available LLMs.
[ "natural language processing" ]
Reject
https://openreview.net/pdf?id=0Ag8FQ5Rr3
https://openreview.net/forum?id=0Ag8FQ5Rr3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zHLN3tTHyx", "uHdCMWs7SL", "svgNXMGCm1", "sYtgi5cyQL", "q2e8i1ckuE", "iKzqeEhzD2", "dzBEmmCLFK", "dDsJqGz5Ha", "UVhEzsa98s", "LHSa5MZbgL", "JzUJFMWGqZ", "FoD8UGTqVA", "BRRQKlUdhg", "AHFqm7sJ6A", "A8a9HiVBwF", "9upDATEe8Z", "9tCo14UeDJ", "92gomx5hZI", "6L0xIEMYoN", "6KOIJUTr1h", "4D9YNT9eYK", "1NQzkdoaFW" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "comment", "official_review", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733163814603, 1732242297101, 1733019170493, 1732612147158, 1732542658098, 1732931246786, 1732725133673, 1732689497528, 1732243836166, 1732245763617, 1732243448633, 1737523389818, 1733423180187, 1730699683896, 1732245607655, 1730485357609, 1734611697716, 1730847616181, 1732947960267, 1733164726494, 1730612113166, 1730659770818 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_DNuv" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_72Gg" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_rhtc" ], [ "~Yuzong_Chen1" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_zzW7" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "~Xi_Wang4" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_agim" ], [ "ICLR.cc/2025/Conference/Submission310/Authors" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_DNuv" ], [ "ICLR.cc/2025/Conference/Submission310/Area_Chair_M6eu" ], [ 
"ICLR.cc/2025/Conference/Submission310/Reviewer_zzW7" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_rhtc" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_agim" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_72Gg" ], [ "ICLR.cc/2025/Conference/Submission310/Reviewer_rhtc" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer agim,\\n\\nSince the rebuttal deadline is approaching, we would like to check in about our response to your review. We would appreciate your thoughts on our clarifications and reconsideration on the rating.\\n\\nThanks!\"}", "{\"comment\": \"### 1. The novelty of discovering super weights and super activations.\", \"our_work_makes_two_key_novel_contributions\": \"1) We systematically identify and characterize a tiny set of \\\"super weights\\\" that are crucial for model performance (even a single weight). Prior studies noted weight outliers but did not isolate this essential subset or demonstrate their necessity.\\n2) We establish a causal relationship between super weights and super activations, showing that these extreme activation patterns emerge specifically from super weights. This advances beyond previous work that observed super activations without showing where they stem from.\\n\\n\\n### 2. Comparison to other quantization methods.\\nOur key finding is that super activation handling appears to be a critical mechanism underlying SOTA methods' success. By achieving comparable performance with a simpler method focused solely on super activations, we demonstrate that while SOTA methods effectively handle these activations, they may simultaneously be processing many unnecessary outliers.\\nThis insight has important practical implications. It suggests that quantization methods could potentially be simplified while maintaining effectiveness by focusing specifically on super activations rather than broad outlier management. 
This aligns with a key goal in LLM quantization research: handling as few outliers as possible while maintaining model quality.\\nWe believe our work makes a valuable scientific contribution by revealing a fundamental mechanism behind SOTA methods' success and demonstrating it through a minimal working example. These insights provide clear design principles for developing simpler, more efficient quantization approaches in future research.\\n\\n### 3. In Figure 6, do the authors have any insights into the concave behavior of the scaling factor? Are there specific explanations or potential methods for identifying this optimal scaling factor?\\nThank you for this insightful observation about the concave behavior. We believe the scaling curve's shape can be explained by considering the dynamics of model training, particularly in later stages. As pre-training progresses and the language modeling loss becomes small, the weight decay term begins to dominate the loss function. This regularization effect actively encourages weights to maintain smaller magnitudes, potentially preventing super weights from naturally reaching their optimal scale during training.\", \"this_perspective_suggests_an_interesting_future_direction\": \"using a small calibration dataset to \\\"finetune\\\" just the super weights, allowing them to reach their optimal scale without the constraints imposed by weight decay during pre-training. This targeted approach could help identify more optimal scaling factors while maintaining the model's learned representations. We will update the manuscript to further clarify this point.\\n\\n\\n### 4. Regarding the stop word shift in distribution, is it generally accepted that a higher probability of stop words negatively impacts LLM performance?\\n\\nThe relationship between stop word probabilities and LLM performance is context-dependent. 
In tasks like LAMBADA, where the goal is predicting meaningful target words, our evidence suggests that higher stop word probabilities indeed negatively impact LLM performance. This manifests in two ways:\\nIn next-word prediction tasks like LAMBADA (predicting a passage's final word), increased stop word probability results from the lower probability of meaningful target words across samples.\\nOur case study (lines 258-262) demonstrates that removing super weights causes the target word probability to plummet from 81.4% to under 9%, indicating severe degradation of language modeling capability.\"}", "{\"title\": \"Response\", \"comment\": \"I've read the authors' response. Firstly, I'm sorry about the delay. My question is answered and the experiment in Section 3.2 makes sense. However, only ~2/3 of my concerns are addressed.\\n\\n1) The input space is, in a way, just another activation. I don't think the difference between adversarial images and activation outliers is as big as it may seem. I think the authors should make this connection clear in their work and cite relevant work. Is there, for example, a paper which looks into the effect of \\\"weights\\\" on creating adversarial examples? I think there are a few which show sparse networks are more robust. \\n\\n2) There are 2 things one does when identifying the super-weight, if I recall correctly: (1) finding the layer with the activation jump, (2) identification of the super weight by looking at input/output. What I am saying is one can do (1) for the baseline and only do magnitude pruning in the corresponding layer. This would show it is not the least-magnitude weight. Note you don't search for the super weight globally, so it is unfair for the baseline to do that in a way. This could have been done during the rebuttal easily, but I guess there was a miscommunication.\\n\\n3) Yes, for the weights. Please do. No one really does RTN for quantization. 
Clipping is a common technique.\\n\\nI will increase my score, but I'm still in the middle for this work. I think it is interesting, but some key baselines are missing.\"}", "{\"comment\": \"Thanks to the authors for their response. However, my primary concerns regarding performance improvement over SOTA and hardware efficiency remain unaddressed. I still believe that for the paper to be accepted, there should be an improvement either in the perplexity or hardware efficiency to show the value of the findings. Therefore, I will keep my score unchanged.\"}", "{\"comment\": \"Thank you for your response. Unfortunately, I believe the paper has critical issues that render it not yet suitable for acceptance at ICLR. Many of these concerns were outlined in my initial review, which highlighted significant weaknesses across multiple aspects of the work.\\n\\nI have carefully reviewed your response multiple times. However, it did not directly address any key questions or resolve the primary concerns raised in my review. The fundamental issues span across the setting, problem formulation, questions posed, and the methodology, among other aspects, as detailed in my comments.\\n\\nI rarely provide such a strong rating, but in this case, I find it difficult to see the value in the current form of the paper. For these reasons, I must maintain my current rating.\"}", "{\"title\": \"Potentially misleading review generated by LLM\", \"comment\": \"I appreciate the authors' effort in writing a good paper and Reviewer rhtc for providing the review. However, it seems that this review doesn't point out the real weaknesses of the paper (most Weakness points are about minor paper-writing issues which have been addressed by the authors). The last paragraph of the review, \\\"after several readings\\\", is confusing, since the reviewer mentioned additional concerns but didn't list them out. The text structure was potentially obtained from LLMs like ChatGPT. 
I suggest the area chair look into this review to provide a fairer assessment of the paper.\"}", "{\"comment\": \"> there should be an improvement either in the perplexity or hardware efficiency to show the value of the findings.\\n\\nWe respectfully disagree that the main value of research lies strictly in obtaining SOTA on benchmarks. We ask the reviewer to reconsider their stance when reviewing papers (even if not for this paper) that the main value of research is through SOTA benchmarks: this reviewing strategy discourages publishing an intriguing and previously unreported phenomenon (e.g. that even a *single* weight alteration destroys a multibillion-parameter neural network).\\n\\nThe ICLR reviewing guidelines share this stance, asking reviewers the following (https://iclr.cc/Conferences/2025/ReviewerGuide):\\n> What is the significance of the work? Does it contribute new knowledge and sufficient value to the community? Note, this does not necessarily require state-of-the-art results. \\n\\nThis work does contribute new knowledge and sufficient value to our community even though it does not necessarily present state-of-the-art results.\"}", "{\"comment\": \"I thank the authors for their careful response and appreciate their efforts. I can now more clearly see the impact and novelty of the work and method. I believe the results are important for a better understanding of LLM functionality, irrespective of the practical implications on quantization. I have decided to raise my score to a 6.\"}", "{\"comment\": \"### 1. Regarding the necessity of \\\"super weights\\\":\\nOur research demonstrates that there is a crucial distinction between general outliers and what we term \\\"super weights.\\\" Our key finding is that not all outliers carry equal importance, and their importance is not determined by magnitude. 
Specifically, we show that super weights, despite not being the largest weights in the network, have a disproportionate impact compared to the other 7,000+ outliers combined. This novel observation challenges the conventional understanding of outlier importance in LLMs.\\n\\n\\n### 2. Regarding the \\\"Prune SW, +SA\\\" setting: \\nOur experiments show that pruning super weights (SW) naturally leads to a significant decrease in super activation (SA) magnitudes, shown in Figure 4. The \\\"Prune SW, +SA\\\" experiment was specifically designed to investigate whether super weights' influence is limited to their direct effect on super activations (on a single token) or if they also impact other tokens through different pathways. We have updated the manuscript to make this distinction clearer.\\n\\n### Regarding Figure 2:\\nThe figure illustrates the propagation mechanism of super weights' influence through the network. Specifically, it shows how a super weight in an early layer generates a super activation, which then propagates through skip connections to subsequent layers, ultimately affecting token probabilities. \\n\\n### Others\\nOn minor points of terminology and notation, we have updated the manuscript to:\\n- Standardize our mathematical notation (e.g., Y_{ij}, X_{ik}, W_{jk})\\n- Enhance our figures and tables to better convey our findings\\n\\nRegarding your statement \\u201cI recommend a strong reject based on the quality of this paper and will not change my rate\\u201d: we find such firm-willed sentiment in a scientific review panel concerning, and we encourage the reviewer to read the ICLR reviewing guidelines:\\n\\u201cIf you believe that a paper has flaws in terms of its evaluation or validation, proofs, or other parts of the discussion, it is critical to point this out to the authors. 
The authors will have an opportunity to address these concerns, and the iterative process of improving papers after reviewer feedback is important for ensuring the highest quality of ICLR papers.\\u201d\\n\\nThis reviewing process is designed to be iterative, where both reviewers and authors have the opportunity to discuss their questions, concerns, and suggestions for improvement.\"}", "{\"comment\": \"Thank you for highlighting our novel findings on super weights and their practical implications for LLM quantization - we're particularly glad you found the methodology clear and appreciate your recognition of how this work could benefit future research in the field.\\n\\n### 1. Connection to Adversarial Examples:\\nThis is a good point. We agree that there is a similarity between super weights and adversarial examples, however, there is also a fundamental difference between them. We will add the following discussion to the manuscript:\\n\\nThe connection between Super Weights and adversarial examples lies in how both phenomena demonstrate neural networks' sensitivity to small changes. In adversarial examples, minor input perturbations can drastically alter model outputs. Similarly, modifying Super Weights can significantly impact model quality. This sensitivity suggests that neural networks develop highly specialized pathways during training that can be vulnerable to targeted changes.\\nHowever, there is a fundamental distinction between these phenomena. Adversarial examples are input-specific \\u2013 they exploit vulnerabilities in how the model processes particular inputs. In contrast, Super Weights represent core structural elements whose importance persists across all inputs. While adversarial examples reveal weaknesses in input processing, Super Weights illuminate fundamental aspects of neural network architecture and computation. 
This input-agnostic nature of Super Weights suggests they represent essential computational primitives that emerge during training, rather than specific vulnerabilities that can be exploited.\\n\\n### 2. Magnitude Pruning Baseline\\nWhen we did global magnitude pruning, we set thresholds for each weight tensor to identify outliers, meaning it captures the most significant weights in any given layer already. The fact that removing super weights degrades performance more than removing these globally-identified outliers - including those within the same layer - provides even stronger evidence for their unique importance.\\n\\n### 3. Quantization Baseline with Clipping\\nWe would like to clarify that for activation quantization, we did not apply clipping in either the baseline or the proposed method. Are you suggesting incorporating clipping in the RTN baseline in weight quantization? We will be happy to provide the enhanced baseline results, if that's the case.\\n\\nWe would also like to clarify that we use samples from the train set from Wikitext-2 to determine the clipping threshold, while evaluating the models on the test set, which is independent of the train set.\\n\\n### 4. Clarification on the \\\"Prune SW, +SA\\\" setting.\\nLet's take Llama-7B as an example. In the mlp.down_proj layer, the super weight is the [3968, 7003] element in weight matrix W [4096, 11008]. For a 10-token input, the hidden states X [10, 11008] transform through down_proj to X' = XW^T [10, 4096]. The super weight affects the 3968th channel across all tokens, while a super activation typically appears on just one token.\\nThe \\\"Prune SW, +SA\\\" condition removes both the super weight and any associated super activations to investigate whether SW's impact on model quality operates:\\n- Primarily through the single-token SA\\n- Also through broader effects across other input tokens\"}", "{\"comment\": \"Thank you for acknowledging the importance of the discovery of super weights.\\n\\n### 1. 
The improvements of proposed methods with existing baselines are quite marginal.\\nWe would like to clarify that our study's primary objective is not to outperform SOTA methods, but rather to explain why they work effectively.\\n\\nOur key finding is that super activation handling appears to be a critical mechanism underlying SOTA methods' success. By achieving comparable performance with a simpler method focused solely on super activations, we demonstrate that while SOTA methods effectively handle these activations, they may simultaneously be processing many unnecessary outliers.\\n\\nThis insight has important practical implications and is of broad interest to the research community. It suggests that quantization methods could potentially be simplified while maintaining effectiveness by focusing specifically on super activations rather than imprecise outlier management. This aligns with a key goal in LLM quantization research: *handling as few outliers as possible while maintaining model quality*.\\n\\nWhile our method does not exceed SOTA performance, we believe our work makes a valuable scientific contribution by revealing a fundamental mechanism behind SOTA methods' success and demonstrating it through a minimal working example. These insights provide clear design principles for developing simpler, more efficient quantization approaches in future research.\\n\\n### 2. How super weights are formed during training.\\nA comprehensive analysis of training dynamics is planned for future work; we can share some preliminary findings based on our analysis of OLMo-1b training checkpoints.\\nWe traced the evolution of super weight magnitudes across training steps (see plots [here](https://imgur.com/a/fwfdN45)), with each curve representing an individual super weight. Our observations revealed two distinct phases that we will further study in future work:\\n1. 
Underfitting Phase (0-100k steps):\\n- Super weights exhibit rapid magnitude growth\\n- Their magnitudes surpass other outlier weights significantly\\n2. Overfitting Phase (100k-700k steps):\\n- Super weight magnitudes gradually decrease\\n- This decline is likely attributable to the weight decay mechanism\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Comment on author response\", \"comment\": \"This is a very interesting paper! But I have a small question about the author's response:\\n\\n> As pre-training progresses and the language modeling loss becomes small, the weight decay term begins to dominate the loss function.\\n\\nNotice that this is true for Adam + L2 regularization, which is rarely used in practice and LLM pre-training. However, I think this is much less likely to be true for AdamW, where an explicit, derivable coefficient for the L2 regularization does not exist.\\n\\nWith that said, I still agree with the authors that weight decay could be playing some very important role in super weights related observations!\"}", "{\"summary\": \"The paper is about the discovery of super weights in LLMs that are disproportionately important, pruning these hurts model quality quite a bit. The authors have provided a way to identify these super weights using a forward pass. 
Super weights and activations are sensitive to quantization effects and hence authors propose a super weight aware quantization method enabling effective quantization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Novel discovery about the importance of a few handful of neurons: The identification and analysis of super weights and super activations as critical outliers and their positive influence on model's performance is noteworthy and interesting.\", \"quantization_proposals\": \"Authors went one step further to propose a super weight-aware quantization method to make the best use of these super weights/activations. Data free quantization proposal with on par performance compared to SmoothQuant is also a worthy contribution.\", \"weaknesses\": \"Though the discovery is quite interesting, the improvements of the proposed methods over existing baselines are quite marginal. In general, such kind of super weights might be a natural phenomenon in any machine learning model. How can one say this is relevant only to LLM's?\\n\\nThe work seems to be very much based on empirical observations (which is not my concern) but more discussions/intuitions/explanations around how/why these super weights are formed will be useful.\", \"questions\": \"The paper mostly focuses on post training model weight/activation analysis and identifies a certain handful of important weights/activations. The authors also say that irrespective of the input prompt the super weights are always the same and they mostly occur in the early layer's down projection with some reasoning via skip connections diagram.\\n\\nThough these insights are helpful, it would be good if authors can follow up with what happens during the training process that such super weights are formed in the first place. 
Does the training methodology in terms of quantization during training/layernorm, gradient scaling, etc play any role in the forming of these super weights?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for acknowledging the discovery of super weights and super activations.\\n\\n### W1. Failed to improve SOTA\\nOur study's primary objective is not to outperform SOTA methods, but rather to explain why they work effectively.\\nOur key finding is that super activation handling appears to be a critical mechanism underlying SOTA methods' success. By achieving comparable performance with a simpler method focused solely on super activations, we demonstrate that while SOTA methods effectively handle these activations, they may simultaneously be processing many unnecessary outliers.\\nThis insight has important practical implications. It suggests that quantization methods could potentially be simplified while maintaining effectiveness by focusing specifically on super activations rather than broad outlier management. This aligns with a key goal in LLM quantization research: handling as few outliers as possible while maintaining model quality.\\nWhile our method does not exceed SOTA performance, we believe our work makes a valuable scientific contribution by revealing a fundamental mechanism behind SOTA methods' success and demonstrating it through a minimal working example. These insights provide clear design principles for developing simpler, more efficient quantization approaches in future research.\\n\\n### W2: Hardware experiments\\nIn this work, our primary focus was demonstrating the impact of super outliers on quantization. 
We agree that hardware performance metrics would be valuable in practical applications (but such measurements depend strongly on the hardware chosen), and such analysis does not impact the key observation of this paper: that super outliers exist. \\n\\n### Q1: For equation 1, the median is used to replace super activation. Is getting the median time-consuming since GPU is not good at sorting?\\nWe initially used the median value as a simple placeholder, since the value will be immediately replaced before computation with the actual super outlier. A better choice, would be to use any inlier value, since again, this is a simple placeholder. To optimize computational efficiency, we updated our manuscript to specify that instead of a median value, we can use the first element of the tensor, as this requires O(1) time complexity compared to O(n log n) for median calculation. This modification will not affect the final results while significantly reducing computational overhead. We will update the manuscript to reflect this more efficient approach and clarify the rationale behind the placeholder selection.\\n\\n### Q2: Run SmoothQuant on more models (line 407 - line 409). \\nWe will add these comparisons to the final manuscript.\"}", "{\"summary\": \"This paper introduces the concept of \\\"super weights\\\" in Large Language Models (LLMs), identifying a small number of individual weight parameters (as few as one) that have a disproportionately large impact on model performance. Pruning these super weights drastically reduces the quality of generated text, while pruning thousands of other larger-magnitude outliers has a negligible effect. The paper proposes a data-free method for identifying super weights based on their connection to \\\"super activations,\\\" exceptionally large activation outliers previously observed in LLMs. 
Finally, the paper demonstrates that preserving super weights and activations during quantization significantly improves compression quality, achieving results competitive with methods like SmoothQuant.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The identification of \\\"super weights\\\" and their connection to super activations represents a novel and potentially significant finding in understanding the inner workings of LLMs.\", \"Connection of \\\"super weights\\\" to quantization accuracy is quite interesting and has practical implications.\", \"The paper provides a clear methodology for identifying super weights and evaluating their impact, along with an index of super weight coordinates for common LLMs, facilitating further research.\"], \"weaknesses\": [\"# Major\", \"Connection to Adversarial Examples: The literature extensively documents how small changes in the input domain can drastically alter output probabilities. Consequently, significantly harming the network by removing weights, as demonstrated, is somewhat expected. A discussion addressing the connection between super weight removal and adversarial examples would strengthen the paper.\", \"Magnitude Pruning Baseline: In Table 1, the comparison of super weight pruning with global magnitude pruning may not be the most informative. A stronger baseline would involve pruning only within the layer where super activations occur. This would better isolate the impact of the super weight itself.\", \"Quantization Baseline: The \\\"Naive W8A8\\\" quantization baseline should incorporate clipping. The current presentation makes it unclear whether the observed improvements stem from outlier removal or clipping, especially since super weight handling affects only a single layer during quantization, while clipping is applied to every layer. 
Furthermore, it should be noted that the clipping threshold is determined using Wikitext-2, which is also included in the evaluation of quantized models.\", \"# Minor\", \"Terminology: The term \\\"extreme\\\" might be more descriptive and informative than \\\"super\\\" when referring to these weights.\", \"Weight Distribution Visualization: Including a histogram visualizing the position of the super weight within the overall weight distribution would enhance understanding of its magnitude relative to other weights.\"], \"questions\": [\"Section 3.2, \\\"Prune SW+SA\\\": The description of the \\\"Prune SW+SA\\\" condition in Section 3.2 is unclear. Specifically, how does this condition differ from the original model? I understand that super activations typically precede super weights in the model. Therefore, I am unsure what modification is being made in \\\"Prune SW+SA\\\" and how it distinguishes itself from the original, unpruned model. Could you please elaborate on this procedure?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper explores the impact of superweights\\u2014defined as weights with larger magnitudes\\u2014on the performance of large language models (LLMs). The authors analyze the influence of these superweights on LLMs\\u2019 performance and propose specialized quantization methods tailored to superweights. The experimental results show some performance improvements achieved by the method, particularly when employing larger block sizes within the network.\\n\\nHowever, three reviewers have raised notable concerns regarding the limited experimental improvements, the lack of clarity in the experiments, and also lack of deep analysis. Furthermore, one reviewer has questioned the definition and conceptual clarity of the superweight concept. As a result, the current version of the paper does not appear ready for publication. 
We recommend that the authors thoroughly address these issues in alignment with the reviewers' feedback.\", \"additional_comments_on_reviewer_discussion\": \"Here I list the main issues:\\n\\n1.\\tChallenging the novelty of this work (Reviewer zzW7):\\nThe authors have adequately addressed this concern, leading to a positive score from the reviewer.\\n\\n2.\\tMarginal experimental improvement (Reviewers agim and 72Gg):\\nThe authors did not provide additional experiments but explained that their findings are intriguing. However, both reviewers remain unsatisfied with the response.\\n\\n3.\\tUnclear experimental explanations (Reviewers zzW7, rhtc, DNuv):\\nThe authors addressed the concerns point by point, resolving many issues raised by the reviewers.\\n\\n4.\\tEmpirical observations lacking discussion/intuition on superweight formation (Reviewer agim):\\nThe authors presented experimental findings but did not offer in-depth explanations. The reviewer found the response not so sufficient.\\n\\n5.\\tHardware efficiency (Reviewer 72Gg):\\nThe authors did not provide additional experiments, and the reviewer believes the claims of hardware efficiency are overstated.\"}", "{\"summary\": \"This paper focuses on the impact of outlier weights in large language models (LLMs), specifically larger weights, which the authors term superweights and superactivations. First, the authors analyze how much these weights and activations affect LLM performance. They then use this as motivation to discuss quantization methods designed to account for superweights and superactivations. 
Throughout the paper, the authors also discuss the impact of superweight scaling and provide experimental results showing how their quantization method improves upon standard rounding, especially when using larger block sizes within the network.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and effectively illustrates the importance of superweights and superactivations. I appreciate the discussion on the percolation of superactivations across the network and the identification of superweights across layers (Figure 3). Additionally, I find the potential implications of superweight upscaling presented in Figure 6 quite interesting.\", \"weaknesses\": \"While I appreciate the analysis presented in this paper, I am struggling to see the novelty of this work. I may be misunderstanding, but from what I gather, superweights and superactivations have already been discussed in prior analyses of LLMs. Additionally, it seems that methods like AWQ and SqueezeLLM inherently focus on superactivations. Furthermore, compared to other weight quantization techniques, the proposed method does not appear to offer significant improvements.\", \"questions\": \"1. Could the authors provide clarification on the points I raised in the weaknesses section, especially if I may have misunderstood some of the contributions?\\n\\n2. In Figure 6, do the authors have any insights into the concave behavior of the scaling factor? Are there specific explanations or potential methods for identifying this optimal scaling factor?\\n\\n3. 
Regarding the stop word shift in distribution, is it generally accepted that a higher probability of stop words negatively impacts LLM performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responding to this public comment\", \"comment\": \"- However, it seems that this review doesn't point out the real weakness of the paper (most Weakness points are about minor paper writing issues which have been addressed by the authors).\\n\\nThis assessment is completely inaccurate. The main problem is not mere grammatical errors, but a fundamental lack of clarity, inadequate explanations, and even contradictory definitions, charts, and explanations. Those things require very substantial improvement, even rethinking the motivation and the foundational assumptions of this work. I believe my review provided detailed evidence with specific examples and data to substantiate each critique.\\n\\n- The last paragraph of the review, \\\"after several readings\\\" are confusing since the reviewer mentioned additional concerns but didn't list them out.\\n\\nThe phrase \\\"after several careful readings\\\" was used because I have already identified 9 significant issues in my first few readings, and additional problems persist throughout the paper. However, continuing to list each one would not alter the fact that the paper's overall quality falls significantly below the expected standard. Crucial concerns, such as the protocol for identifying outliers and the subsequent application of super weights, are not discussed at all. This should have been addressed at the outset. Throughout the paper, the term 'weights' seems to have three different meanings: weights, activations, and a combination of weights and activations. 
This level of confusion extends far beyond mere \\\"minor writing issues.\\\"\\n\\nAs for the comment on the text structure being derived from LLMs like ChatGPT, I clarify that while ChatGPT was used to check for grammatical consistency, the content of the reviews was written independently by me.\\n\\nSince this is a public discussion, I expect future critiques to be based on substantial, measurable claims. I am not inclined to continue this conversation otherwise.\"}", "{\"comment\": \"Thank you for the responses to my questions. It does sound like answers to my questions are mostly part of planned future work. I have read other reviewers comments and discussions as well and at this point I would like to keep my rating purely because submitting a more comprehensive paper with additional understanding and improvements will make it a stronger contribution to the community in the next iteration.\"}", "{\"summary\": \"This paper reveals that Large Language Models (LLMs) contain a very small subset of weights\\n(super weights) that are extremely important, where removing them severely degrades model\\nperformance. The researchers developed an efficient, data-free method to identify these super\\nweights using only a single forward pass. They further investigated how these super weights\\ninfluence network behavior by analyzing their relationship with activation outliers. 
Building on\\nthese insights, they proposed a quantization approach that carefully preserves these super\\nweights while effectively compressing other weights, resulting in the maintenance of model\\nquality after compression.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The discovery is interesting and the proposed quantization method is easy to implement, which\\ncan maintain better performance compared to Round to nearest quantization with the same\\nblock size.\", \"weaknesses\": \"The authors failed to show how the proposed methods can improve the SOTA.\\n1. Although the method is data-free, its performance does not exceed SOTA methods like\\nSmoothQuant, given incorporating a small calibration dataset would not increase the\\nquantization complexity much.\\n2. The author mentions that this method is hardware-friendly, but no experiments to show\\nits effectiveness in improving latency, throughput, memory usage, etc.\", \"questions\": \"1. For equation 1, the median is used to replace super activation. Is getting the median\\ntime-consuming since GPU is not good at sorting? (Although there are GPU-version\\nsorting algorithms)\\n2. The authors mentioned that SmoothQuant does not report on some models this paper\\nevaluates, they compare our results with naive W8A8 quantization (line 407 - line 409).\\nCan the authors run SmoothQuant on these models since it is open-source? 
The naive\\nW8A8 is a too-weak baseline.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the sensitivity of a subset of outliers in LLMs, referring to them as \\\"super weights.\\\" The authors conducted experiments to examine the impact of these super weights on model performance.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The authors conducted experimental explorations on the so-called \\\"super weights.\\\"\", \"weaknesses\": \"1. The necessity of \\\"super weights\\\" is unclear, as outliers are already identified based on the threshold. Increasing the threshold will naturally reduce the number of outliers with very large weights. Given the known importance of outliers in LLMs, emphasizing \\\"super weights\\\" (outliers at a higher threshold) does not appear novel.\\n\\n2. Figure 1 is misleading. According to the author's definition, \\\"super weights\\\" are a subset of outliers. However, the figure suggests -1.9 is a typical outlier with nearby values being quite small (.1 and .2), implying that zeroing out outliers produces nonsensical text\\u2014a widely acknowledged fact. To better demonstrate the significance of super weights, it would be beneficial to explore whether zeroing out all outliers results in poor performance, and similarly, whether zeroing out just a small subset (e.g., 20-30) leads to comparably severe degradation.\\n\\n3. Table 1 raises critical concerns. First, the criterion for selecting outliers needs specification. Second, the \\\"Prune SW, +SA\\\" setting in Lines 146-152 is confusing, as it suggests pruning super weights while partially restoring super activations enhances quality. However, the authors did not prune activations, leading to confusion about this claim.\\n\\n4. Table 2 appears redundant and fails to convey meaningful information. 
Replacing it with visual representations of \\\"super weights\\\" distributions would be more informative, as the current table occupies considerable space without offering clear insights.\\n\\n5. Figure 2 is difficult to interpret. The depiction of super weights and their impact, such as generating nonsensical text, is not clear. The use of the same color block in both the network and the output is puzzling. Are the model's dynamics linear? How do the output and weights share the same significance? Clarification is needed on whether this figure is based on assumptions or empirical data.\\n\\n6. In Lines 189-190, the term \\\"super activations\\\" is introduced but lacks clarity on whether it is threshold-based or aligns with corresponding weights, which could be time-consuming. The authors should clarify this terminology.\\n\\n7. The paper contains several unprofessional notations. For example, \\\"Yij\\\" should be corrected to \\\"Y_{ij}\\\" in Line 204, and similarly, \\\"Xik\\\" and \\\"Wjk\\\" should be \\\"X_{ik}\\\" and \\\"W_{jk}\\\" in Line 205. The inconsistency in notation and dimensions between \\\"d\\\" and \\\"D\\\" in Line 204 suggests a lack of careful writing and review, raising concerns about the overall professionalism of the paper.\\n\\n8. Lines 198-210, which discuss the identification of super weights, are crucial yet unclear. The selection criteria for super weights remain ambiguous and need a precise mathematical description. Readers should understand the definition of outliers and the criteria for their selection explicitly.\\n\\n9. The paper lacks consistency in terminology. \\\"Super weights\\\" sometimes refer to both activations and weights, and at other times only to weights, adding confusion. 
In Line 306, the term \\\"super outliers\\\" is introduced, suggesting that the paper should maintain consistent terminology from the start, including in the title, if both weights and activations are discussed.\\n\\nAfter several careful readings, there are numerous additional concerns throughout the paper. The issues are substantial and critical, making it unlikely to meet the standards of ICLR. I recommend a strong reject based on the quality of this paper and will not change my rate.\", \"questions\": \"Please refer to the weaknesses section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0ASCZrVzSX
Blessing of Dimensionality for Approximating Sobolev Classes on Manifolds
[ "Hong Ye Tan", "Subhadip Mukherjee", "Junqi Tang", "Carola-Bibiane Schönlieb" ]
The manifold hypothesis says that natural high-dimensional data lie on or around a low-dimensional manifold. The recent success of statistical and learning-based methods in very high dimensions empirically supports this hypothesis, suggesting that typical worst-case analysis does not provide practical guarantees. A natural step for analysis is thus to assume the manifold hypothesis and derive bounds that are independent of any ambient dimensions that the data may be embedded in. Theoretical implications in this direction have recently been explored in terms of generalization of ReLU networks and convergence of Langevin methods. In this work, we consider optimal uniform approximations with functions of finite statistical complexity. While upper bounds on uniform approximation exist in the literature in terms of ReLU network approximation, we consider the opposite: lower bounds to quantify the fundamental difficulty of approximation on manifolds. In particular, we demonstrate that the statistical complexity required to approximate a class of bounded Sobolev functions on a compact manifold is bounded from below, and moreover that this bound is dependent only on the intrinsic properties of the manifold, such as curvature, volume, and injectivity radius.
[ "approximation theory", "manifold hypothesis", "statistical complexity", "Riemannian geometry" ]
Reject
https://openreview.net/pdf?id=0ASCZrVzSX
https://openreview.net/forum?id=0ASCZrVzSX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYA8MoHnjF", "x99Vi3RDub", "uVX8uvjgic", "kXQ14TF4oE", "ji7k2uHy7F", "ixrJiKQMZy", "b7WKmjrNbT", "b58SCaPmwu", "GyUcv8VvAu", "ExYWaVocVW", "EOlOhV7I3O", "C0fqvyw2to", "0YPAVKRVVW" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review" ], "note_created": [ 1732670614462, 1732227904369, 1730707456495, 1732227955523, 1732227845930, 1732616990346, 1730662517171, 1732230252782, 1732227633796, 1737523435035, 1733173537360, 1735156220571, 1730479254511 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_StSY" ], [ "ICLR.cc/2025/Conference/Submission1089/Authors" ], [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_vHxw" ], [ "ICLR.cc/2025/Conference/Submission1089/Authors" ], [ "ICLR.cc/2025/Conference/Submission1089/Authors" ], [ "ICLR.cc/2025/Conference/Submission1089/Authors" ], [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_qm9Z" ], [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_vHxw" ], [ "ICLR.cc/2025/Conference/Submission1089/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_qm9Z" ], [ "ICLR.cc/2025/Conference/Submission1089/Area_Chair_s8Xk" ], [ "ICLR.cc/2025/Conference/Submission1089/Reviewer_StSY" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the clarifications. I have decided to leave my score as is, since I feel the paper is somewhat limited in applicability.\"}", "{\"title\": \"Response to Reviewer qm9Z continued\", \"comment\": \"> line 758: The authors say \\\"By maximality, balls of radius $2\\\\epsilon$ at the $p_i$ cover $M$\\\" but why is this true? The manifold can potentially be very narrow.\\n\\nSuppose balls of radius $2\\\\epsilon$ did not cover $M$. 
Then there exists a point $p$ of $M$ which is of distance at least $2\\\\epsilon$ away from all the $p_i$. Then, open balls of radius $\\\\epsilon$ centered at the points $\\\\{p_1,...,p_{N_\\\\epsilon},p\\\\}$ are all disjoint, contradicting maximality of the packing by definition.\\n\\n> line 691: What is the notation $\\\\mathcal{A}(z, 16/\\\\sqrt{\\\\textrm{length}(z)}$?\\n\\nThere are some typos in Definition A.1 which have now been fixed. $\\\\mathcal{A}(z, \\\\epsilon)$ takes a set of samples $z$ and an error bound $\\\\epsilon$, and produces an approximate solution to the empirical risk/sample error minimization problem. Proposition A.3 can be interpreted as the learning algorithm also requiring the approximate empirical risk/sample error minimizing oracle $\\\\mathcal{A}$ to be increasingly precise as the sample size increases. These have now been fixed in the revised version.\\n\\n> Should the empirical risk in (56) be scaled by the sample size?\\n\\nYes, fixed.\"}", "{\"summary\": \"This paper studies the complexity of Sobolev function class on Riemannian manifolds. Specifically, the paper derives lower bound of the approximation error of a Sobolev ball by a smaller class with complexity bounded as pseudo-dimension. By constructing explicitly functions of bounded Sobolev norm that are separated in $L^1$, the paper connects the packing number of the manifold with a hard-to-learn subclass in the Sobolev ball, thus forcing a larger error/width. The main theorem claims a lower bound that only depends on intrinsic quantities of the manifold.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper has a natural motivation, and the concluded rate seems matching that of classical Euclidean case. The presentation is lucid, and the proof sketch and extended discussion is well written. Overall this paper is a solid contribution on the topic of manifold learning.\", \"weaknesses\": \"1. 
The result is not too surprising on the 1,p-Sobolev class, and considering that higher-order Sobolev spaces can even be RKHSs [1], one would expect major improvement on the rate. Also using volume comparison to control the packing number is rather standard, and one might further ask if the same technique is applicable to metric measure spaces or RCD spaces, though I understand this technicality may not be particularly befitting of this venue.\\n\\n\\n2. Typos: \\nDefinition 2.7 is defining packing number not covering number, and also metric entropy is not commonly defined this way. This version of metric entropy is exactly the same as packing number, hence (7) is not needed. (The key proposition C.1 is correct.) $CPf_a$ outside the balls on line 375 is not necessarily 0.\\n\\n[1] De Vito, Ernesto, Nicole M\\u00fccke, and Lorenzo Rosasco. \\\"Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces.\\\" Analysis and Applications 19.03 (2021): 363-396.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer StSY\", \"comment\": \"We thank the reviewer for the kind review. Please find point-by-point responses to the concerns and comments below.\\n\\n> It's not clear to me how applicable these results are in practice. Even when data satisfies the manifold assumption, the intrinsic dimension d may be quite large. It is not clear how large the authors think d is in practice, and how large a d would make these results applicable vs vacuous. For MNIST, for example, it's often given that d is between 7 and 14, depending on the digit. One can assume d is much larger for more challenging problems, maybe 20-40? 
In this case, the error bound $1/n^{1/d}$ is vacuous, unless the number of data points n is astronomically large (e.g., if d=20 we need $10^{20}$ data points!).\\n\\nThis work is intended as an extension of classical theory into the manifold hypothesis setting. Existing approximation results are equally vacuous for standard imaging examples, and moreover, estimates of intrinsic dimension vary wildly between methods. However, from the independence of ambient dimension, practical guidance could include minimizing the intrinsic dimensionality by reducing the number of uninformative variates. Our results quantitatively show that such variates increase the pseudodimension required to uniformly approximate the Sobolev ball with an exponential factor. \\n\\nWe would like to clarify that in Theorem 3.1, $n$ refers to the pseudo-dimension with which we are allowed to approximate the Sobolev class. As a concrete example, pseudodimension of ReLU neural networks scales as $\\\\Omega(W \\\\log W)$, where $W$ is the number of parameters, see for example Theorem 3 in [Bartlett2019]. The dependence on data points comes afterwards, using e.g. Proposition 2.3, albeit giving a sample complexity that is linear in the allowed pseudodimension. Towards the gap between the theoretical dataset requirements and practical machine learning, this could be because the Sobolev class is still a very descriptive class of functions. Rates for more structured classes of functions could be cause for future work.\\n\\n> I don't understand why the main result, Theorem 3.1, is measuring nonlinear n-width between the Sobolev space $W^{1,p}$ and the Lebesgue space $L^q$. It's really unclear to me why this implies anything about the difficulty in approximating Sobolev space functions. 
I'd like to see this clarified.\\n\\nThe main result concerns the approximation of the unit Sobolev ball using function classes of finite pseudo-dimension, where the approximation distance is measured in terms of maximum $L^q$ distance. Since the unit Sobolev ball has infinite pseudo-dimension, approximation with finite complexity classes is necessary for computation. We provide a lower bound for how much complexity is required to ($L^q$-uniformly) approximate functions in the unit Sobolev ball, which can be informally translated as how complicated the Sobolev ball is.\\n\\n### References\\n[Bartlett2019] Bartlett, P. L., Harvey, N., Liaw, C., \\\\& Mehrabian, A. (2019). Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20(63), 1-17.\"}", "{\"title\": \"Response to Reviewer qm9Z\", \"comment\": \"We thank the reviewer for the detailed review. The authors would be happy to add additional background into the appendix if the reviewer believes that it would aid the readability of the paper. The typos have been fixed in the revised paper. Please additionally find below point-by-point responses to the reviewer's concerns.\\n\\n> Not only is the result of Theorem 1 independent of ambient dimension $D$, the ambient dimension does not appear anywhere in the estimates. This is somewhat odd because the abstract mentions the manifold hypothesis which concerns both $D$ and $d$. In some similar approximation bounds, there is typically some dependence on $D$. The authors should address this.\\n\\nExisting approximation results typically consider explicit constructions to construct upper bounds on the statistical complexity. Such constructions generally come in terms of neural networks, in which the dependence on the ambient dimension comes naturally as a function of the output dimension. 
In this work, we consider instead an abstract manifold without any extrinsic structure as given by (isometric) embeddings into $\\\\mathbb{R}^D$. As such, there is no dependence on the ambient dimension $D$ in our results.\\n\\n> In a similar vein, the authors do not present a connection between the sample complexity mentioned in Proposition 2.3 and the main Theorem 1, as far as I can see. The assumed connection is that, due to this property of classes with finite pseudo-dimension $\\\\mathcal{H}_n$, the Sobolev class can also be estimated with the sample complexity given in (2), once the approximating class $\\\\mathcal{H}_n$ is determined, it can be estimated with this sample complexity. This connection should be made somewhere.\\n\\nIndeed, our result provides a worst-case lower bound on the approximation error, assuming that the true solution lies in an area that is not well approximated by $\\\\mathcal{H}_n$. There is a tradeoff between the approximation error between the risk minimizers in the Sobolev ball and in $\\\\mathcal{H}_n$, as well as the generalization error incurred within $\\\\mathcal{H}_n$, with the former decreasing and the latter increasing as the allowed pseudo-dimension $n$ increases. Our result then characterizes the first tradeoff, while the second is given by classical generalization bounds/statistical complexity results. We add a short section to discuss this, left to the appendix due to space limitations in the main text.\\n\\n> The main structure of the proof is almost identical to (Maiorov and Ratsaby, 1999), except the construction of the $L^1$-separated set of functions, due to the domain being a manifold. There are a few questions about the extended lower bound, which I ask below in the \\\"questions\\\" section.\\n\\nWe address the reviewer's questions below. 
We note that in addition to the generalization from the unit hypercube $[0,1]^D$ to a general (compact separable Riemannian) manifold without boundary, our result also gives the exact dimension dependence by giving explicit constants, further extending (Maiorov and Ratsaby, 1999).\\n\\n> The Theorem 1 lower bound (12) has a dependence on $p$, unlike in the Euclidean case (Maiorov and Ratsaby, 1999). Is there a plausible reason for this, i.e. is $p$ there for an inherent reason?\\n\\nThe dependence arises from the fact that the volume of a manifold is not identically 1, as the Euclidean case considers only the unit hypercube $[0,1]^d$. The dependence on $p$ comes from an application of H\\\\\\\"older's inequality to bound the constructions in their respective spaces, and are always attached to a $\\\\mathrm{vol}(M)$.\\n\\n> Why is Theorem 1 restricted to the case $K<0$? What is different about the positive curvature case?\\n\\nThe estimate will be tighter in the positive curvature case, as volume estimates will be tighter. We present the result for the case where $K<0$ since this is more general. The proof can be easily modified for $K>0$ by changing Equation (40) to the corresponding form for elliptic space, namely replacing $\\\\sinh(\\\\sqrt{|K|t})$ with $\\\\sin(\\\\sqrt{Kt})$ (for $\\\\rho < \\\\pi \\\\sqrt{|K|}$) to derive the same result, but with a different requirement for $r$ in Equation (51).\\n\\n> line 535: it is mentioned that the cutoff functions with bounds on higher derivatives is difficult to construct, but I am having trouble seeing why this should be so. Can the authors explain further?\\n\\nWhile intuitively simple in the Euclidean case, going further than one derivative introduces Riemann curvature terms that must be individually controlled (corresponding to curvature when computing cross derivatives), such as in Equation (8). 
The authors are unaware of similar constructions in the literature that explicitly uniformly control the higher-order covariant derivatives. While a uniform bound on the Riemann curvature tensor $g^{ij}$ would allow for this, we consider only bounded Ricci curvature as it is more standard in the literature.\"}", "{\"comment\": \"We thank the reviewers for their kind and helpful comments on our theoretical work. We have revised the paper, fixing minor typos and adding a short discussion into the appendix as to how our bounds fit into error decomposition, marked in blue. As the pdf revision deadline is approaching, we would be happy to incorporate any additional small changes that the reviewers may find helpful for our work, as well as answer any outstanding theoretical or conceptual queries. Thank you.\"}", "{\"summary\": \"The paper concerns a lower bound for a nonlinear width (involving the pseudo-dimension, a generalized version of the VC dimension) of Sobolev classes over smooth manifolds of dimension $d$. The authors claim that while the manifold can be embedded in a higher dimensional space with dimesion $D \\\\gg d$ the width of the Sobolev class has a lower bound that depends only on the dimension $d$ of the manifold.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"For a technical paper, the presentation is approachable and is self-contained (modulo some typos and missing definitions). The paper extends the lower bound proved in (Maiorov and Ratsaby, 1999) to manifolds.\", \"weaknesses\": [\"Not only is the result of Theorem 1 independent of ambient dimension $D$, the ambient dimension does not appear _anywhere_ in the estimates. This is somewhat odd because the abstract mentions the manifold hypothesis which concerns both $D$ and $d$. In some similar approximation bounds, there is typically some dependence on $D$. 
The authors should address this.\", \"In a similar vein, the authors do not present a connection between the sample complexity mentioned in Proposition 2.3 and the main Theorem 1, as far as I can see. The assumed connection is that, due to this property of classes with finite pseudo-dimension $\\\\mathcal{H}_n$, the Sobolev class can also be estimated with the sample complexity given in (2), once the approximating class $\\\\mathcal{H}_n$ is determined, it can be estimated with this sample complexity. This connection should be made somewhere.\", \"The main structure of the proof is almost identical to (Maiorov and Ratsaby, 1999), except the construction of the $L^1$-separated set of functions, due to the domain being a manifold. There are a few questions about the extended lower bound, which I ask below in the \\\"questions\\\" section.\"], \"questions\": [\"The Theorem 1 lower bound (12) has a dependence on $p$, unlike in the Euclidean case (Maiorov and Ratsaby, 1999). Is there a plausible reason for this, i.e. is $p$ there for an inherent reason?\", \"Why is Theorem 1 restricted to the case $K < 0$? What is different about the positive curvature case?\", \"line 535: it is mentioned that the cutoff functions with bounds on higher derivatives is difficult to construct, but I am having trouble seeing why this should be so. Can the authors explain further?\", \"line 758: The authors say \\\"By maximality, balls of radius $2\\\\epsilon$ at the $p_i$ cover $M$\\\" but why is this true? The manifold can potentially be very narrow.\", \"line 691: What is the notation $\\\\mathcal{A}(z, 16 / \\\\sqrt{\\\\text{length}(z)})$?\", \"Should the empirical risk in (56) be scaled by the sample size?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the response, which addressed my questions on generalizations to several directions. 
I thus decide to leave the score unchanged, and raise my confidence to 4.\"}", "{\"title\": \"Response to Reviewer vHxw\", \"comment\": \"We thank the reviewer for the kind review. Please find point-by-point responses to the reviewer's concerns.\\n\\n> The result is not too surprising on the 1,p-Sobolev class, and considering that higher Sobolev space can even be an RKHS [1], one would expect major improvement on the rate. Also using volume comparison to control the packing number is rather standard, and one might further ask if the same technique is applicable to metric measure spaces or RCD spaces, though I understand this technicality may not be particularly befitting of this venue.\\n\\nIndeed, we recover rates as given by Euclidean intuition. We note that since we derive rates in negatively curved space, the rates are slightly worse than Euclidean. Conversely, in positively curved space, one may be able to derive better constants using more precise bounds when using Bishop--Gromov. Asymptotically however, one should expect the same rates as the Euclidean case, as the neighborhoods given by packings informally become flat. \\n \\nFaster rates require higher regularity of $W^{k,p}$, which change the lower bound from $n^{-1/D}$ to $n^{-k/D}$ in the Euclidean case. We note that construction of functions with appropriately bounded higher derivatives is more difficult than the Euclidean case due to the presence of curvature, and the authors are not aware of a result similar to [Azagra2007]. While the authors believe this is possible, this could introduce additional superexponential dependence on $d$, and require a uniform bound on the Riemann curvature tensor instead of only the Ricci curvature. \\n \\nWe speculate from a brief look at [Sturm2006] that the result may still hold in RCD spaces given appropriate analogs of Bishop--Gromov, as well as a uniform control on the measure of small balls (Prop. 3.3). 
However, the authors are not familiar with this line of work.\\n\\n> Typos: Definition 2.7 is defining packing number not covering number, and also metric entropy is not commonly defined this way. This version of metric entropy is exactly the same as packing number, hence (7) is not needed. (The key proposition C.1 is correct.) $\\\\mathcal{C}Pf_a$ outside the balls on line 375 is not necessarily 0.\\n\\nChanged the incorrect naming from covering number to packing number (Def. 2.7). We believe there are separate communities that define metric entropy differently (namely log of the covering number). We have replaced \\\"metric entropy\\\" with \\\"$\\\\epsilon$-metric entropy\\\" in Definition 2.7 and Lemma C.2, as there does not seem to be an alternative name for the cardinality of the largest $\\\\epsilon$-separated subset. \\n \\nBy definition in Equation (29), $\\\\mathcal{C}$ clamps any function to 0 outside the balls $B_r(p_i)$.\\n\\n### References\\n[Sturm2006] Sturm, Karl-Theodor. ``A curvature-dimension condition for metric measure spaces.\\\" Comptes Rendus Mathematique 342.3 (2006): 197-200.\\n\\n[Azagra2007] Daniel Azagra, Juan Ferrera, Fernando Lopez-Mesas, and Yenny Rangel. Smooth approximation of Lipschitz functions on Riemannian manifolds. Journal of Mathematical Analysis and Applications, 326(2):1370\\u20131378, 2007.\"}
The proof extends an argument of Maiorov and Ratsaby on functions on [0,1]^d by constructing well separated collections of functions on M \\u2014 roughly, these are W^{1,p} functions localized to a packing set of geodesic balls. The main modification of the argument is to control the effect of curvature (the main difference vis-a-vis the hypercube). The implication of this result for learning is to provide a lower bound for learning (intrinsic) Sobolev functions which depends only on intrinsic quantities.\\n\\nThis is a technically solid paper, which provides lower bounds on the sample complexity of learning on Riemannian manifolds. The results here are complementary to existing upper bounds for extrinsic learners (such as ReLU networks) on submanifolds. The paper provides lower bounds which are independent of embedding \\u2014 depending only on the intrinsic properties of M [of course, the sample complexity of extrinsic learners may depend on embedding]. The main concerns pertained to (A) the significance of the technical innovations in the paper, and (B) the practical implications of this setting and results for learning in moderate d.\", \"additional_comments_on_reviewer_discussion\": \"The main points of discussion included\\n(A) the possibility of extending the results to higher order Sobolev spaces, improving the rate from n^{-1/d} to n^{-k/d}.\\n(B) the implication of the type of approximation developed here for learning \\u2014 in particular, \\n - the implication on extrinsic learners such as neural networks\\n - the relationship between nonlinear n-widths in Lq and learnability \\nThe paper provides lower bounds which are independent of the embedding. As the authors and reviewers both note results on higher order Sobolev spaces may have stronger implications for learnability (since the nonparametric rate of n^{-1/d} is quite slow for d moderate).\"}", "{\"summary\": \"This paper is focused on the manifold assumption in machine learning. 
The goal is to further shed light on how the intrinsic dimension of the data manifold enters into notions of complexity in function approximation. In particular, the authors prove lower bounds on the complexity of approximating Sobolev space functions on manifolds, and show that the lower bounds, which are essentially 1/n^(1/d), depend only on the intrinsic dimension d of the manifold, and not the ambient dimension of the space the manifold lies in. The authors use a notion of pseudodimension that is an extension of VC-dimension and measure complexity by the nonlinear n-width.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The question of why deep neural networks work well on extremely high dimensional data is an important problem and the manifold hypothesis may be a good way to explain this. Work on this problem is important and valuable in machine learning. The problem of lower bounds on complexity is not studied as often as upper bounds. The results appear to be new and non-trivial.\", \"weaknesses\": \"It's not clear to me how applicable these results are in practice. Even when data satisfies the manifold assumption, the intrinsic dimension d may be quite large. It is not clear how large the authors think d is in practice, and how large a d would make these results applicable vs vacuous. For MNIST, for example, it's often given that d is between 7 and 14, depending on the digit. One can assume d is much larger for more challenging problems, maybe 20-40? In this case, the error bound 1/n^(1/d) is vacuous, unless the number of data points n is astronomically large (e.g., if d=20 we need 10^(20) data points!).\", \"questions\": \"I don't understand why the main result, Theorem 3.1, is measuring nonlinear n-width between the Sobolev space W^{1,p} and the Lebesgue space L^q. It's really unclear to me why this implies anything about the difficulty in approximating Sobolev space functions. 
I'd like to see this clarified.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0AHkdAtFW8
Sum-of-Squares Programming for Ma-Trudinger-Wang Regularity of Optimal Transport Maps
[ "Sachin Shivakumar", "Georgiy Antonovich Bondar", "Gabriel Khan", "Abhishek Halder" ]
For a given ground cost, approximating the Monge optimal transport map that pushes forward a given probability measure onto another has become a staple in several modern machine learning algorithms. The fourth-order Ma-Trudinger-Wang (MTW) tensor associated with this ground cost function provides a notion of curvature in optimal transport. The non-negativity of this tensor plays a crucial role in establishing continuity of the Monge optimal transport map. It is, however, generally difficult to analytically verify this condition for any given ground cost. To expand the class of cost functions for which MTW non-negativity can be verified, we propose a provably correct computational approach which provides certificates of non-negativity for the MTW tensor using Sum-of-Squares (SOS) programming. We further show that our SOS technique can also be used to compute an inner approximation of the region where MTW non-negativity holds. We apply our proposed SOS programming method to several practical ground cost functions to approximate the regions of regularity of their corresponding optimal transport maps.
[ "Optimal transport", "sum-of-squares programming", "Ma-Trudinger-Wang tensor" ]
Reject
https://openreview.net/pdf?id=0AHkdAtFW8
https://openreview.net/forum?id=0AHkdAtFW8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zrQA4iaape", "zgTnYB4BxW", "zU0L7hkand", "zM6f9vqo6n", "yU90trY7Qy", "yQ1kUdpbCb", "vaVXv9UhkW", "qGJourt5zS", "k0YBZC2nxF", "hai6nddpr1", "fxgbn1qIlD", "exIqEfXISb", "WgGyOYsYx8", "Nqb8CwvByv", "NnQFbJOWZ6", "L4bSOqsmWq", "IDYjvTkT8K", "I6ruudHohj", "H3E4CEe95u", "F8ZCO0HKl8", "EtQ2qXlI30", "CthQxKvC8l", "0kfhrMcXuu" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732412676314, 1732225694017, 1732232433748, 1730699935502, 1732227963313, 1732216162234, 1732215114174, 1730306365172, 1732222147976, 1732442729172, 1729125722049, 1733287942595, 1730484690340, 1732405925295, 1732216814851, 1732225491526, 1730700105730, 1734458170502, 1732561225358, 1732237573839, 1737523531989, 1732218980382, 1733287789548 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_o9bV" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_o9bV" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_AeCx" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_fQTp" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_XnX2" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_o9bV" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_XnX2" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_P2kY" ], [ "ICLR.cc/2025/Conference/Submission2778/Area_Chair_U96f" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Reviewer_AeCx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ], [ "ICLR.cc/2025/Conference/Submission2778/Authors" ] ], "structured_content_str": [ "{\"comment\": \"The reviewer would have to apologize to the author that the reviewer has to maintain the score. The current manuscript is more suitable for more theoretical venues, for example in applied probability. The crux of the matter is the assumptions given by the work, combined with the computational scaling, are too restrictive for practical usage.\\n\\nFor OT applications in practice, it is quite well-known that the typical cost function one would pick would be either the squared Euclidean cost $c(x, y) = ||x - y||^2$ or the EMD cost $c(x, y) = ||x - y||$. For the squared euclidean cost, the regularity is already proven. For the EMD cost, it is quite clear that the assumption A1 doesn't hold. While the authors can perhaps argue that the function admits some rational approximation, the fact remains that it is only just an approximation, and one won't be able to answer regularity questions in a rigorous fashion. If the authors indeed go through this route of a rigorous application of the SOS approach to EMD, then perhaps this work would have been fine in ICLR and the reviewer would support the publication. But the fact of the matter is that this work doesn't address the typical cases in which OT regularity is concerned. \\n\\nIn addition, generically, the fundamental theorem of LP says that the optimal transport solution has an almost unique identification, in the sense that a point can at most on average couple with two other points. 
So what this means is that the regularity and uniqueness in the practical discrete OT case is almost always true. Therefore, the regularity would almost always happen in practice. In practice, non-uniqueness is not a big issue for OT-based ML applications, and the added paragraph in the paper isn't strong enough, as unregularized OT is almost never used due to its daunting computational complexity.\\n\\nHaving that as the context, the main contribution of the work is a computational tool to check the regularity that could be efficient in low-dimensions and for quite esoteric rational cost functions. This limitation is quite daunting. If someone presents the cost function of $c(x, y) = || x- y||^2 - ||x-y||^4$, the reaction of a typical researcher would be to just conclude that the lack of convexity means a lack of regularity globally, and a fine-grained approach for figuring out where the regularity fails to hold is very synthetic. \\n\\nTherefore, as much as the reviewer appreciates the work, the argument for the applicability of this work in practice is quite limited. As much as the reviewer appreciates this work, increasing the score beyond the passing threshold would be against the reviewer's professional judgment.\"}", "{\"title\": \"Response to Reviewer fQTp: making the writing accessible\", \"comment\": [\"**On making the writing accessible**\", \"In the revised manuscript, we have made several edits and additions with examples to improve the quality of exposition. These include\", \"a new Appendix A detailing the ideas related to nonnegative and SOS polynomials, Archimedean sets. 
This new Appendix A includes several examples to illustrate the progression of ideas, and is referred to inline in the main body in Section 2.2,\", \"an example of semialgebraic set right after its definition in the main body in Section 2.2,\", \"a new paragraph after assumption **A1** in the main body in Section 1, better explaining why that assumption is benign.\"]}", "{\"title\": \"Reply to the author\", \"comment\": \"The reviewer thanks the author for the detailed answer. The reviewer is satisfied with the response.\\n\\nThe $O(n^{9d_N/4})$ cost seems quite large even in the case where $ n = 2$, and the reviewer is not sure if this scaling is suitable for this particular machine learning conference. The reviewer will keep the score on this ground, but the reviewer would strongly encourage the author to submit to more suitable venues that focus more on the theoretical complexities of machine learning and less on practical solutions to existing problems.\", \"summary\": \"The paper presents a sum-of-squares programming based approach to verifying the Ma-Trudinger-Wang (MTW) condition. Precisely, both the forward problem of identifying if a given cost function and domains satisfy the MTW condition and the inverse problem of finding the largest semialgebraic domain on which the MTW condition holds are considered. The corresponding problems can be solved via standard SOS solvers on modest hardware. The paper concludes with a numerical study which validates the theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"To my knowledge, this is the first paper which explores the question of numerically verifying the MTW condition. The paper is overall written well, and the theoretical details look correct.\", \"weaknesses\": \"The limitation of the paper regards the assumption that the cost function is rational or that the elements of the MTW tensor are rational. 
I believe it would be useful to provide general examples of when it holds/does not hold in the text to further clarify how strong/weak the assumption really is.\\n\\nI believe, however, that the implications of this work are not quite fully fleshed out.\", \"questions\": \"My main question pertains to how the authors see this work fitting within the broader optimal transport literature. In effect, though some results require regularity of the optimal transport map to hold, these results typically pertain to questions of statistical estimation. In these settings, population measures are estimated based on samples and so (i) absolute continuity cannot be verified a priori, (ii) upper and lower bounds for the density cannot be verified, and (iii) the supports of the distributions are unknown. It is thus unclear to me how the content of the current paper fits within the previous context.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer o9bV\", \"comment\": \"**Response to weaknesses:**\\n- (**On gap between non-negative and SOS polynomials**) Thanks for pointing this out. *In the revised manuscript, we have clarified the details regarding this gap in the new Appendix A. Specifically, in Appendix A.3, we now explain the SOS formulation over Archimedean semialgebraic sets* which is what causes the equivalence between (7) and (8). \\n- (**On index notation**) We agree. *To improve the clarity of index notation in the revised manuscript, we have added all the related notations $c_{ij,kl}$, $c_{ij,k}$ and $c_{j,kl}$ in the same row in Table 1*. We follow this convention for the partial derivatives of $c$ to be consistent with the literature on the MTW tensor, initiated by the original paper [1].\\n- (**On Theorem 5**) Thank you for noticing and highlighting this ambiguity. 
Indeed, as the reviewer pointed out, the quantity in equation (13) is a matrix-valued polynomial. We have revised the manuscript to use a different notation for matrix-valued SOS. Now, we use $ \\\\displaystyle\\\\sum_{\\\\rm SOS}^n$ to represent $n\\\\times n$ matrix-valued SOS constraints. This new notation is added in Table 1. There we also pointed out the notational reduction: $ \\\\displaystyle\\\\sum_{\\\\rm SOS}^1 = \\\\displaystyle\\\\sum_{\\\\rm SOS}$. *To help the readers, the new Appendix A.1 and A.2 in the revised manuscript explain these ideas in a systematic way with accompanying examples*.\\n- (**On contraction**) This contraction is known as the *pseudo-scalar product* (see Definition 2.1 of [2] for its formulation for the square-distance cost). For more general costs, this can be formalized by the Kim-McCann pseudo-Riemannian framework of optimal transport (see Definition 2.3 of [3]). To improve the readability of the paper, we avoided stating our results in terms of the pseudo-Riemannian formulation, instead we implicitly used the ambient Euclidean geometry in order to evaluate the pseudo-scalar products. *To address the reviewer's comments, we have explained this in the revised manuscript in the paragraph preceding Definition 3*.\\n- (**On minor comment**) We appreciate the reviewer's careful reading. *In the revised manuscript, we have fixed this issue in equation (12)*.\\n\\n**Response to questions:**\\n- (**On computational complexity**) *Following the reviewer's suggestions, in the revised manuscript, we included a computational complexity analysis for the forward NNCC/MTW problems in the new Appendix D*. We derived the scaling w.r.t. the dimension $n$ and the number of semialgebraic constraints $\\\\ell$. In summary, the SOS worst-case complexity is polynomial in $n$ (see Appendix D for details) and sub-quadratic in $\\\\ell$. *As per the reviewer's suggestion, in Sec. 
4.2, we also reported the runtimes for the numerical examples 3 and 4 solving the inverse problem*.\\n- (**On regularity of Monge and Brenier OT**) Brenier's theorem states that for sufficiently regular measures, the Monge OT map for the Euclidean squared-distance cost exists and admits polar factorization that Brenier studied. So for sufficiently regular measures, these two maps are the same. This map is point to point as well as Borel (i.e., the preimage of open sets are Borel sets) which is necessary for the pushforward of measures to be well-defined. In general, when Brenier's polar factorization holds, then the Monge OT map is the $c$-subdifferential of a $c$-convex potential. The original MTW paper derives a $C^2$ estimate on this potential (which induces a $C^1$ estimate on the transport). This implies that the Jacobian equation is uniformly elliptic, so higher regularity (e.g., differentiability of the transport) follows using standard elliptic bootstrapping. In other words, for smooth costs/measures the optimal OT map and the potential will both either be infinitely differentiable or else there will be a set of measure zero where the potential is non-differentiable and the transport is discontinuous (see Section 4.5 of [4] for the latter case).\\n\\n[1] X.-N. Ma, N. S. Trudinger, and X.-J. Wang, \\u201cRegularity of potential functions of the optimal transportation problem,\\u201d Archive for rational mechanics and analysis, vol. 177, pp. 151\\u2013183, 2005.\\n\\n[2] A. Figalli, L. Rifford, and C. Villani, \\u201cNearly round spheres look convex,\\u201d American Journal of Mathematics, vol. 134, no. 1, pp.\\n109\\u2013139, 2012.\\n\\n[3] Y.-H. Kim and R. J. McCann, \\u201cContinuity, curvature, and the general covariance of optimal transportation,\\u201d Journal of the European\\nMathematical Society, vol. 12, no. 4, pp. 1009\\u20131040, 2010.\\n\\n[4] G. De Philippis and A. 
Figalli, \\u201cThe Monge\\u2013Amp\\u00e8re equation and its link to optimal transportation,\\u201d Bulletin of the American Mathematical Society, vol. 51, no. 4, pp. 527\\u2013580, 2014.\"}", "{\"title\": \"Response to Reviewer P2kY\", \"comment\": \"We thank the reviewer for the careful reading and the pertinent comments. Please find our itemized responses below.\\n\\n**Response to weaknesses:**\\n\\nThank you for this suggestion. *In the revised manuscript's Appendix D, we detailed the worst-case runtime complexity analyses for the SDP computation associated with the NNCC forward problem and the MTW$(\\\\kappa)$ forward problem*. We discuss the scaling w.r.t. the dimension $n$ and the number of semialgebraic constraints $\\\\ell$. In summary, the SOS worst-case complexity is polynomial in $n$ and sub-quadratic in $\\\\ell$. We point out that our analyses are valid for off-the-shelf generic interior point SDP solvers as used in our numerical examples but do not account for the sparsity patterns induced by the block diagonal structure specific to our formulations. This is why the runtimes observed in our numerical examples are better than the theoretical worst case derived in the newly included Appendix D. In practice, additional speed-ups are possible for specific problems by taking into account suitable symmetries of the cost $c(x,y)$ and/or the manifold $\\\\mathcal{M}$ (e.g., translation and/or rotational invariance). \\n\\nFor the inverse problem, in principle, a similar analysis is possible. However, the complexity then is governed by the desired tightness of the semialgebraic inner approximation of the region where the NNCC or the MTW$(\\\\kappa)$ conditions hold. *For this reason, in the revised manuscript, we have reported the runtimes for the solution of the inverse problems (numerical examples 3 and 4), as suggested by Reviewer o9bV*. \\n\\n**Response to questions:**\\n\\nThis work is the first computational approach to OT regularity verification. 
All existing results for OT regularity verification in the literature (please see refs. in paragraph before \\\"Contributions\\\" on page 2) are analytical and are available for only a few specific cost functions. Such calculations, when possible, are tedious even for researchers in mathematical analysis and do not generalize under variations in the cost function. This is what motivated our study. Being the first computational work in this area, our study shows that the proposed SOS framework is extremely promising in terms of accuracy and efficiency in handling non-trivial cost functions, which would otherwise be challenging to treat via analytic calculations.\"}", "{\"title\": \"Summary of improvements\", \"comment\": [\"We are grateful to the reviewers for their careful reading of our work and for the perceptive inputs provided.\", \"Here is a summary of the major improvements we have made to address the reviewers' comments. For the reviewers' convenience, all revisions in the manuscript are marked in blue.\", \"**(Accessible writing with pedagogical examples)** Based on the reviewers' comments, we have included a *new Appendix A titled \\\"Nonnegative Polynomials and Sum-of-Squares Programming\\\"* (cited in Sec. 2.2) that explains the SOS polynomial and its decomposition (Sec. A.1), the matrix-valued SOS polynomial and its decomposition (Sec. A.2), and the SOS polynomials and Archimedean semialgebraic sets (Sec. A.3). In addition to definitions and explanations, this includes several examples to illustrate the connections between different ideas (e.g., the gap between SOS and nonnegative polynomials). Sec. A.3 is specifically intended to explain the equivalence between (7) and (8) for Archimedean semialgebraic sets. In Sec. 2.2, an example is included right after the definition of a semialgebraic set. 
We believe these additions will help readers who may not be familiar with some of the background ideas.\", \"**(Computational complexity)** Following the suggestions from multiple reviewers, we performed a detailed computational complexity analysis for the NNCC/MTW forward problems in terms of the dimension $n$ and the number of semialgebraic constraints $\\\\ell$. They scale polynomially in $n$ and sub-quadratically in $\\\\ell$. The analysis is included in the *new Appendix D titled \\\"Computational Complexity\\\"* (cited at the end of Sec. 3.1), where Sec. D.1 and D.2 detail the complexity analysis for the NNCC and the MTW forward problems, respectively. In the individual responses to the reviewers, we explain that these results are for off-the-shelf SOS/SDP solvers that do not exploit our problem structure, and thus further speed-ups should be possible with customized solvers. Per the suggestion of Reviewer o9bV, the runtimes for the inverse problems are now reported at the end of the numerical examples 3 and 4. Here too, most of the computational overhead was found to be in problem parsing and SDP setup to deploy off-the-shelf solvers. These results are very encouraging for a first computational work on OT regularity.\", \"**(Improved explanations)** In Sec. 1 (Introduction), we slightly rephrased the 3rd paragraph and included a new paragraph after assumption **A1** to explain why this assumption is benign. The latter also mentions with citation that one of the early driving factors for OT regularity theory was the engineering problem of reflector antenna design, which was reformulated as an OT problem with non-Euclidean ground cost $c(x,y)=-\\\\log\\\\|x-y\\\\|$ over the sphere. In Sec. 
2.1, paragraph after Definition 2, we clarified some differential geometric issues raised by Reviewer o9bV.\", \"**(Notational improvement and simplifications)** We fixed the matrix-SOS notation throughout to distinguish it from the scalar SOS notation, and clarified it, together with the index notation, in the revised Table 1. Some additional notations are also included in that Table. Some formulas were simplified in Proposition 13 within Appendix B.\", \"**(Better positioning the work w.r.t. OT literature)** In addition to mentioning the antenna design problem, we added a new paragraph at the end of Sec. 2.1 explaining why certifying the NNCC or MTW conditions is of interest in designing algorithms for solving unregularized OT problems with general costs, i.e., such certifications are of interest beyond the regularity of the OT map. We wrote a detailed response to Reviewer fQTp on the growing relevance of solving OT problems with non-Euclidean ground costs in ML applications. *We also included the Appendix F listing examples of OT with non-Euclidean ground costs with references, pointing out that most of these are amenable to the proposed SOS framework*.\", \"**Scope and fit for ICLR:**\", \"While SOS programming is well-known in ML and optimization, the novelty of this work is its application to automatically certify/falsify OT regularity, and to computationally discover regions where local regularity holds. This is a real, recognized problem: its importance is acknowledged in both the ML and OT theory literature, but existing approaches rely on unwieldy analytical calculations hand-crafted for very few costs. As a result, these calculations and the related techniques in the existing literature do not generalize. Being the first work on computational verification of OT regularity, this work opens the door to a new research direction in computational OT, especially for non-Euclidean OT, which is increasingly finding applications in ML, including a significant ICLR footprint in recent years. 
We believe this work--its topic, style, novelty and significance of results--is well within the scope of ICLR.\"]}", "{\"summary\": \"In the context of OT, the fourth-order Ma-Trudinger-Wang (MTW) tensor associated with this ground cost function provides a notion of curvature. The non-negativity of this tensor plays a crucial role in establishing continuity of the Monge optimal transport map. In general, it is difficult to analytically verify this condition for any given ground cost. This paper proposes a provably correct computational approach which provides certificates of non-negativity for the MTW tensor using Sum-of-Squares (SOS) programming. The authors further show that their SOS technique can also be used to compute an inner approximation of the region where MTW non-negativity holds. They apply this proposed SOS programming method to several practical ground cost functions to approximate the regions of regularity of the corresponding OT maps. They also evaluate the proposed SOS computational framework for both the forward and the inverse problems.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This is a mathematically solid paper that resolves an interesting theoretical problem. It proposes a provably correct computational framework that can certify or falsify the non-negativity of the MTW tensor associated with a given ground cost under the assumptions that the ground cost is a rational and semialgebraic function. The proposed approach is based on sum-of-squares (SOS) programming and can be of independent interest. The authors also demonstrate that the proposed computational framework can be applied to non-rational ground cost functions given that the elements of the MTW tensor are rational, and that it can be used to solve the inverse problem.\", \"weaknesses\": \"The main concern is that this paper seems irrelevant to the ICLR community. 
In practice, the common ground cost function would be Euclidean and I am not really sure if it is practically important to conduct the computational verification of OT regularity for a general class of non-Euclidean ground cost functions. I encourage the authors to elaborate on potential ML applications or benefits of their work on non-Euclidean optimal transport. For example, would you like to discuss how your method could enhance practical ML systems that use OT, or to provide concrete examples of where non-Euclidean costs arise in ML problems?\\n\\nAnother concern is the poor quality of writing. In particular, there are many advanced mathematical notations, such as semialgebraic functions and Archimedean sets, which are not accessible to the ICLR audience. Both Section 2 and Section 3 are written in a technical way without sufficient intuitive explanations or examples alongside the formal mathematical notation. In my humble opinion, the major contribution of the paper would be the SOS formulations for computing the MTW tensors, which is certainly nontrivial, but this paper would much better fit the applied mathematics oriented journal.\", \"questions\": \"1. I encourage the authors to elaborate on potential ML applications or benefits of their work on non-Euclidean optimal transport. For example, would you like to discuss how your method could enhance practical ML systems that use OT, or to provide concrete examples of where non-Euclidean costs arise in ML problems?\\n\\n2. 
I encourage the authors to improve accessibility of technical parts (e.g., the description of forward problems and inverse problems), such as adding more intuitive explanations or examples alongside the formal mathematical notation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Questions: Reviewer XnX2\", \"comment\": \"**Response to Questions:**\\n\\n* (**On OT applications where regularity of Monge map is crucial**) Historically, one of the motivations driving the development of the regularity theory for the Monge map was the engineering problem of reflector antenna design (see [1] cited in manuscript's line 169-170), which is an OT problem with cost $c(x,y) = -\\\\log\\\\|x-y\\\\|$ on the sphere. This is indeed an example where elements of the MTW tensor are rational although $c$ itself is not: a situation where our proposed SOS framework can automatically certify NNCC/MTW conditions. *In the revised manuscript, this motivation is now explicitly stated in the new paragraph added after the assumption **A1***. 
Knowledge of the regularity of the OT map is also crucial for designing custom approximation algorithms for the Monge map itself with a priori guarantees on the quality of approximation, and for designing algorithms to solve unregularized OT problems with generic costs [2].\\n\\n* (**On polynomial non-negativity and SOS**) *Per the Reviewer's suggestion, we have now included Appendix A in the revised manuscript, which elaborates on the difference between the two problems (namely, optimization problems with polynomial non-negativity constraints and with SOS constraints) and the conditions under which the two problems are equivalent.*\\n \\n* (**On three types of non-negativity conditions**) The precise relationship between the three conditions studied in this paper and the continuity of the optimal transport is the following.\\n1) The pioneering work [3] used the $MTW(\\\\kappa)$ condition and established that if the measures were sufficiently regular and the supports were relatively $c$-convex, then the OT map would be smooth.\\n 2) In a subsequent paper, [4] showed a similar result for costs which satisfy $MTW(0)$ rather than the previous stronger condition. However, for this result it was necessary to impose stronger assumptions on the supports of the measure (i.e., strong relative $c$-convexity). For this reason, this paper does not cover all of the results proven in the original 2005 paper [3]. Later, [1] provided an interpretation of $MTW(0)$ in terms of convex analysis which demonstrates the importance of this condition.\\n3) NNCC is a strengthening of the MTW condition. Strictly speaking, this condition is not directly related to the regularity of optimal transport except in that it implies the MTW condition. 
However, it plays an important role in the study of optimal transport because it implies that certain quantities are convex (see Lemma 6.1 of [5]).\\n\\n- (**On computational burden for higher dimensions**) In example 2, numerical experiments were performed with higher dimensions for the Log-partition cost function. This example is particularly useful in evaluating the runtime complexity of the algorithm because the MTW tensor for this cost is positive definite for all $n$ (but the SOS framework does not know that), and we provide certificates of positivity for all $n$ up to $6$ (findings were summarized in Table 2). In other words, the SOS framework can be used as a tool for computational discovery. *In the revised manuscript, we have also provided a thorough runtime complexity analysis in Appendix D*. To summarize, the underlying SDP problem, obtained from the SOS formulation of the forward problem, has polynomial scaling w.r.t. the dimension $n$ and sub-quadratic scaling w.r.t. the number of semialgebraic constraints $\\\\ell$.\\n\\n[1] G. Loeper, \\u201cOn the regularity of solutions of optimal transportation problems,\\u201d Acta Math, vol. 202, pp. 241\\u2013283, 2009.\\n\\n[2] M. Jacobs and F. L\\u00e9ger, \\u201cA fast approach to optimal transport: The back-and-forth method,\\u201d Numerische Mathematik, vol. 146, no. 3, pp. 513\\u2013544, 2020.\\n\\n[3] X.-N. Ma, N. S. Trudinger, and X.-J. Wang, \\u201cRegularity of potential functions of the optimal transportation problem,\\u201d Archive for Rational Mechanics and Analysis, vol. 177, pp. 151\\u2013183, 2005.\\n\\n[4] N. S. Trudinger and X.-J. Wang, \\u201cOn the second boundary value problem for Monge-Amp\\u00e8re type equations and optimal transportation,\\u201d Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, vol. 8, no. 1, pp. 143\\u2013174, 2009.\\n\\n[5] A. Figalli, Y.-H. Kim, and R. J. 
McCann, \\u201cWhen is multidimensional screening a convex program?\\u201d Journal of Economic Theory, vol. 146, no. 2, pp. 454\\u2013478, 2011.\"}", "{\"title\": \"Answer to the rebuttal\", \"comment\": \"I deeply thank the authors for their significant revision and their clear responses, which help to improve the readability of the contribution. Due to my unfamiliarity with the domain, I choose to keep my current score.\"}", "{\"summary\": \"This paper presents a computational method for assessing the regularity structure of optimal transport plans. Specifically, the source and target distribution are continuous distributions on a manifold, the cost is some smooth function, and the regularity in question is the regularity of the pushforward map from source to target distributions. In addition, the computational tool computes the region where the MTW condition holds.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an under-studied aspect of regularity of the Monge map. Moreover, the paper contains a numerical algorithm and is practical.\\n2. The inverse problem is well thought through. \\n3. The numerics seem promising for a relatively difficult problem in OT.\", \"weaknesses\": \"1. While it is not an issue for the author per se, it is unfortunate that the SOS condition only applies to semialgebraic sets for the manifold.\\n\\n2. The writing doesn't seem to include sufficient focus on the gap between the non-negativity condition and the SOS counterpart. It doesn't seem to be clear whether SOS is too strong for this case. \\n\\n3. The presentation is not clear for this paper. For example, the indexing convention for something like $c_{ij, p}$ is quite confusing. The only mention of $c_{ij, kl}$ in the earlier part is too far away, and the readers cannot be expected to find where the notation is and also generalize from $c_{ij, kl}$ to $c_{ij, p}$. This is too confusing for this conference.\\n\\n4. 
Theorem 5 seems currently wrong: the function $F$ in (5) is matrix-valued. It doesn't seem correct to somehow assess if this matrix-valued rational function belongs to \\\\sum_{SOS}[x, y]. *Unless this issue is either resolved or explained, this reviewer cannot increase the score above the acceptance threshold.*\\n\\n5. The author doesn't seem to use certain terms in differential geometry correctly. For x, y on different points of $\\\\mathcal{M}$, one cannot directly apply an \\\"inner product\\\"/contraction between the tangent plane at $x$ and the cotangent plane at $y$. This issue is resolved, however, if $\\\\mathcal{M}$ is a subset of $\\\\R^{n'}$ and the differentiable structure comes from the Euclidean space. The author is advised to change the writing on this and make sure no further major mistake such as this is made.\", \"minor_comments\": [\"The logic at line 293 is wrong: \\\\eta(\\\\xi) = 0 should come after \\\\forall.\"], \"questions\": \"1. The manuscript doesn't contain (A) a discussion on the computational complexity for checking the NNCC condition and the two types of MTW conditions. Also, the discussion on (B) the complexity of the inverse problem is missing. For (A), the author is advised to provide an analysis. For (B), the author is advised to also provide a runtime in the work, i.e. the time it takes to plot the figures.\\n\\n2. What is the relationship between the Monge OT map (which is Borel) and a Brenier map (which is point to point)? Is the regularity of the Brenier map, supposing it exists, also something that this formulation can answer?
\\n\\n*We note in particular the addition of Appendix F in the re-revised manuscript that specifically lists examples of OT with non-Euclidean ground costs, and that they are within the purview of our method -- clarifying a point you raised*. \\n\\nThanks again for your time and feedback to help improve our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"In this paper, the authors propose the first computational approach to certify the regularity of the Monge map for optimal transport problems with specific conditions on the transport cost and the state spaces. To be more precise, they evaluate the non-negativity of the fourth-order Ma-Trudinger-Wang (MTW) tensor associated with the transport cost, which has been proved to be a sufficient condition to establish the continuity of the Monge transport map under proper conditions on the marginals of the transport plan. In this work, they consider three versions of this non-negativity condition, previously considered by related works. Their method consists in reformulating the MTW condition (for each of the three versions) into a sum-of-squares program defined on a semialgebraic set via Putinar's Positivstellensatz, which can then be solved with efficient software. In particular, their approach assumes that the transport cost (or at least the corresponding MTW tensor) is a rational function defined over a two-state semialgebraic space (or at least, a two-state space that contains a semialgebraic space). They apply their framework to verify if the transport cost verifies the MTW condition or to find the largest semialgebraic set on which the transport cost verifies the MTW condition. They propose several convincing numerical experiments in small dimensions for a large variety of non-trivial transport costs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Although I am no expert in SOS programming, the paper is well written so that it can be read by a large audience. 
In particular, the notation is easy to understand and Section 2 provides the most essential theoretical elements from the OT and SOS programming domains to introduce the method.\", \"Given the elements of Section 2, the idea of proving the MTW non-negativity via SOS programming seems to be a very good (and natural) idea. This work seems to be the first to answer this question with relatively moderate theoretical and computational frameworks.\", \"The formulation of the inverse problem is very interesting and once again well introduced and explained.\", \"The diversity of numerical experiments (i.e. non-trivial transport costs) definitely proves the theoretical statements.\"], \"weaknesses\": [\"As it seems crucial to apply SOS programming, the transformation of the non-negativity condition into an SOS representation in Equation (8) would deserve more explanation, in the appendix for example. For non-expert readers, this relation is hard to understand.\", \"The dimension of the numerical experiments is relatively low, while OT aims at solving large-scale problems.\", \"Although the problem tackled in this paper is interesting from a theoretical perspective, I am quite concerned by the effective application of this work to OT problems.\"], \"questions\": [\"Could you please provide examples of real-world OT applications where the knowledge of the regularity of the Monge map is crucial?\", \"Could you please bring more details on the equivalence between non-negativity on polynomial terms and SOS representation?\", \"I think it would be of interest to provide the results on the regularity of the Monge transport map given the three types of non-negativity conditions given in the paper.\", \"Have you considered experiments with higher dimension? 
Is there any computational burden?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer o9bV on scaling and scope in ICLR\", \"comment\": \"We thank the reviewer for their quick feedback and sincere engagement in this discussion.\\n\\nAlthough the theoretical worst-case polynomial scaling of the SOS programming methods (not just our algorithm) seems large, SOS programming has been employed consistently and effectively to solve many large-scale problems in control and optimization \\u2014 a claim supported by our numerical examples (example 2) and other works exploiting sparsity in SOS optimization [2]. \\n\\nIt should be noted that SOS is much faster than cylindrical algebraic decomposition [https://en.wikipedia.org/wiki/Cylindrical_algebraic_decomposition], which returns the regions where a real-valued polynomial is positive or negative but runs in doubly exponential time. \\n\\nPlease also note that the complexity analyses in our Appendix D are valid for off-the-shelf generic interior point SDP solvers as used in our numerical examples, but do not account for the sparsity patterns induced by the block diagonal structure specific to our formulations. This is why the runtimes observed in our numerical examples are better than the theoretical worst case derived in the newly included Appendix D. In practice, additional speed-ups are possible for specific problems by taking into account suitable symmetries of the cost and/or the manifold (e.g., translation and/or rotational invariance), pre-solving the underlying SDP problems to eliminate unnecessary decision variables and constraints, and employing relaxations such as diagonally dominant SOS (DSOS) [1] and sparse SOS (SSOS) [2]. 
\\n\\nWe also believe that, in addition to the practical performance of our algorithm, the theoretical contribution of providing a convex optimization algorithm to address an unsolved problem is significant and within the scope of ICLR. There are no other computational approaches available in the literature for this problem despite its relevance in ML.\", \"reference\": \"[1] A. A. Ahmadi and A. Majumdar, \\u201cDSOS and SDSOS optimization: More tractable alternatives to sum of squares and semidefinite optimization,\\u201d SIAM Journal on Applied Algebra and Geometry, vol. 3, no. 2, pp. 193\\u2013230, 2019.\\n\\n[2] Y. Zheng, G. Fantuzzi, and A. Papachristodoulou, \\u201cSparse sum-of-squares (SOS) optimization: A bridge between DSOS/SDSOS and SOS optimization for sparse polynomials,\\u201d in 2019 American Control Conference (ACC). IEEE, 2019, pp. 5513\\u20135518.\"}", "{\"title\": \"Response to Reviewer AeCx\", \"comment\": \"We thank the reviewer for the questions and suggestions. Please find our itemized responses below.\\n\\n**Response to weaknesses:**\\n\\nThe key assumption in our work is that the elements of the MTW tensor are rational, for which the cost function being rational is sufficient, not necessary (stated in the paragraph just before Sec. 3.1). In fact, we provided numerical examples 2 and 4 where $c$ is non-rational, but the elements of the respective MTW tensors are. The cost in example 2 came from stochastic portfolio theory. As another example, the development of OT regularity theory was motivated by the engineering problem of reflector antenna design (see [Loeper, 2009] cited in the paragraph before Def. 1), which was cast as an OT problem with cost $c(x,y) = -\\\\log\\\\|x-y\\\\|$. This is indeed an example where the elements of the MTW tensor are rational, although $c$ itself is not. Our proposed SOS framework can handle all the aforementioned situations. 
\\n\\nFurthermore, many cost functions in practice, e.g., those induced by the squared geodesic on a Riemannian manifold, are either already polynomials/rationals, or smooth enough to be well-approximated by polynomials/rationals. This is why our assumption that the elements of the MTW tensor are rational is benign.\\n\\n*Following the reviewer's suggestion, we have added a paragraph clarifying the above in the revised manuscript's lines 66-72 (highlighted in blue), right after the assumption **A1**.*\\n\\n**Response to questions:**\\n\\nIt is a fair observation that in many real-world applications, the other hypotheses of the regularity theory will not hold, and thus, the transport may fail to be continuous. However, both the MTW condition and the NNCC condition provide detailed information about the sub-differential structure of $c$-convex functions [1]. In particular, a cost function satisfies the MTW condition iff the $c$-subdifferential of any $c$-convex function is connected. A cost function satisfies NNCC iff the $c$-subdifferential of any $c$-convex function is convex in a certain sense. These facts generalize the classic result that the sub-differential of a convex function is a convex set.\\n\\nIn recent work, Jacobs and L\\u00e9ger [2] developed a fast method for solving OT problems using a back-and-forth gradient descent. This method is notable because it does not use entropic regularization and converges rapidly. However, this algorithm relies on being able to rapidly compute the $c$-conjugate of a function. For the squared-distance cost, an algorithm for fast computation of Legendre transforms exists but depends crucially on the aforementioned convexity properties [3]. To adapt this algorithm to more general cost functions, the primary bottleneck appears to be a fast method for computing the $c$-subdifferential of a $c$-convex function. 
For cost functions that satisfy NNCC (or perhaps even MTW), it should be possible to develop rapid algorithms for $c$-conjugation, which would in turn allow us to find efficient algorithms to solve OT problems. This provides an algorithmic motivation behind *a priori* certification of MTW/NNCC, which is what our work is about. \\n\\n*In the revised manuscript, we have added a paragraph at lines 204 to 212 before Sec. 2.2 explaining this motivation*.\\n\\n[1] G. Loeper, \\u201cOn the regularity of solutions of optimal transportation problems,\\u201d Acta Math, vol. 202, pp. 241\\u2013283, 2009.\\n\\n[2] M. Jacobs and F. L\\u00e9ger, \\u201cA fast approach to optimal transport: The back-and-forth method,\\u201d Numerische Mathematik, vol. 146, no. 3, pp. 513\\u2013544, 2020.\\n\\n[3] Y. Lucet, \\u201cFaster than the fast Legendre transform, the linear-time Legendre transform,\\u201d Numerical Algorithms, vol. 16, pp. 171\\u2013185, 1997.\"}", "{\"title\": \"Response to Reviewer fQTp: relevance to ML\", \"comment\": \"We thank the reviewer for the perceptive comments. Since the weaknesses and questions are on the same two topics, we address them together.\\n\\n**On relevance to ML, relevance of non-Euclidean ground cost in OT**\\n\\n- An immediate consequence of having a computational method like ours that can automate certifying OT regularity is that practitioners will be able to design custom approximation algorithms for the OT map itself with *a priori* quality-of-approximation guarantees, even with non-Euclidean $c$ on nontrivial geometric domains/manifolds. 
Currently this is not possible because OT regularity analysis has remained an area where case-by-case hand computations are available for a very limited number of settings, and these results/techniques do not generalize to other settings (costs, manifolds).\\n\\n- Even when the regularity of the OT map does not hold, verifying the MTW/NNCC conditions is still of importance for designing algorithms for solving the unregularized OT problems with general costs. *In the revised manuscript, we explain this with relevant citations right before Sec. 2.2*.\\n\\n- *In the revised manuscript, the new paragraph after assumption **A1** mentions with citation that the historical motivation driving OT regularity theory was in fact the engineering problem of reflector antenna design*, cast as an OT problem with non-Euclidean ground cost $c(x,y)=-\\\\log\\\\|x-y\\\\|$ on the sphere, which is a non-rational cost but for which the entries of the MTW tensor are rational (thus the proposed SOS framework applies).\\n\\n- OT with non-Euclidean ground costs is common in computer graphics [1] and in high dimensional single cell data analysis [2]. In the computer graphics context, the manifolds are geometric domains/3D surfaces, and the non-Euclidean $c(x,y)$ is the squared geodesic distance over these manifolds. In OT over the single cell data too, the non-Euclidean $c(x,y)$ is induced by the squared geodesic of a (curved) lower dimensional manifold embedded in the (flat) high dimensional state space [3]. Despite being very different applications, both the graphics and the single cell data share the commonality that the squared geodesic, and thus the non-Euclidean $c$, are not analytically available, but are learnt from data via the small-time asymptotic of the heat kernel using Varadhan's formula; see [1,2]. The associated Monge-Kantorovich OT problems are then solved with the numerically learnt non-Euclidean $c$, but finding the corresponding OT maps remains challenging [4]. 
One potential application of our framework can be the following workflow: (i) approximating the non-Euclidean $c$ via polynomials/rationals, (ii) computationally finding the regularity estimates/domains using the proposed forward/inverse problems, and (iii) then using these estimates to design custom approximants for the corresponding OT maps with guarantees on the quality of approximation.\\n\\n- Another source of non-Euclidean ground cost is the family of *exponentially concave functions* which has important applications in mathematical finance and information theory [5]. Ref. [6] showed that if one uses the free-energy as a cost function, then the solutions of the associated OT problems will be induced by exponentially concave functions. OT regularity theory for this non-Euclidean cost function is available [7] but this serves as a prototypical example of where it is advantageous to generalize the cost function.\\n\\n[1] J. Solomon, F. De Goes, G. Peyr\\u00e9, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas, \\u201cConvolutional wasserstein distances:\\nEfficient optimal transportation on geometric domains,\\u201d ACM Transactions on Graphics (ToG), vol. 34, no. 4, pp. 1\\u201311, 2015.\\n\\n[2] G. Huguet, A. Tong, M. R. Zapatero, C. J. Tape, G. Wolf, and S. Krishnaswamy, \\u201cGeodesic Sinkhorn for fast and accurate optimal\\ntransport on manifolds,\\u201d in 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2023,\\npp. 1\\u20136.\\n\\n[3] G. Huguet, D. S. Magruder, A. Tong, O. Fasina, M. Kuchroo, G. Wolf, and S. Krishnaswamy, \\u201cManifold interpolating optimal-transport\\nflows for trajectory inference,\\u201d Advances in neural information processing systems, vol. 35, pp. 29 705\\u201329 718, 2022.\\n\\n[4] G. Schiebinger, J. Shu, M. Tabaka, B. Cleary, V. Subramanian, A. Solomon, J. Gould, S. Liu, S. Lin, P. 
Berube et al., \\u201cOptimal-transport\\nanalysis of single-cell gene expression identifies developmental trajectories in reprogramming,\\u201d Cell, vol. 176, no. 4, pp. 928\\u2013943, 2019.\\n\\n[5] G. Alirezaei and R. Mathar, \\u201cOn exponentially concave functions and their impact in information theory,\\u201d in 2018 Information Theory\\nand Applications Workshop (ITA). IEEE, 2018, pp. 1\\u201310.\\n\\n[6] S. Pal and T.-K. L. Wong, \\u201cExponentially concave functions and a new information geometry,\\u201d The Annals of probability, vol. 46,\\nno. 2, pp. 1070\\u20131113, 2018.\\n\\n[7] G. Khan and J. Zhang, \\u201cThe K\\u00e4hler geometry of certain optimal transport problems,\\u201d Pure and Applied Analysis, vol. 2, no. 2, pp.\\n397\\u2013426, 2020.\"}
The paper concludes by applying the proposed framework to several common examples in the literature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper\\u2019s strength lies in its innovative application of Sum-of-Squares (SOS) programming to address the longstanding challenge of verifying the regularity of optimal transport (OT) maps through the Ma-Trudinger-Wang (MTW) tensor. SOS programming is a well-established tool in optimization and control, but this work extends it to OT regularity, opening new possibilities for computational verification of the MTW conditions in general cases where analytic approaches are intractable. The paper also showcases the practical efficacy of the approach by applying it to various cost functions, demonstrating its flexibility and adaptability across different scenarios.\", \"weaknesses\": \"A missing key aspect in the paper is the time complexity analysis for the proposed framework. What's the computational efficiency of SOS programming in verifying regularity of the different OT problems? While the authors showcase the method\\u2019s application to specific examples and shared the wall-clock time, a time complexity discussion could be a good addition to the paper.\", \"questions\": \"1. What are some other regularity verification methods? how does the SOS programming compare to them in terms of accuracy and efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents a computational approach using Sum-of-Squares (SOS) programming to certify the Ma-Trudinger-Wang regularity condition for optimal transport (OT) maps. While technically sound and offering an interesting contribution, the reviewers exposed several weaknesses. 
A significant concern, particularly emphasized in the final discussions, is the lack of clear exemplification of why this work is relevant for machine learning. The paper does not convincingly demonstrate how the proposed method provides new insights into ML problems where OT is used or enhances practical ML applications. Moreover, reviewers raised concerns about the high computational complexity of the approach, its limited scalability to higher dimensions, and its focus on semialgebraic cost functions, which may restrict practical applicability. Additional weaknesses include the technical presentation, which lacks accessibility for a broader ML audience, and unclear positioning within the OT and ML literature. Despite the efforts made in revisions, these issues were not sufficiently addressed, and the paper fails to bridge the gap between theoretical contributions and practical relevance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers sought clarifications on the work\\u2019s relevance to ML, its scalability, and its computational overhead. While the authors provided detailed responses and additional explanations, the rebuttal did not resolve fundamental concerns about the practical impact of the method and its relevance to the ICLR community. These unresolved issues ultimately led to the decision to reject the paper.\"}
*In the re-revised manuscript, we have now included Appendix F with a Table listing such costs and references, and pointed out that in most of these settings, our proposed framework applies*. \\n\\n**On EMD cost**\\n\\nThe reviewer correctly notes that the EMD cost does not satisfy assumption A1, but this particular case is technically subtle. The regularity problem in this setting is still not fully understood: existing partial regularity results are on the ray-monotone OT plan. Many of these proofs in fact consider the cost function $\\\\sqrt{\\\\epsilon+||x-y||^2}$, which approximates the EMD and satisfies the MTW condition. By taking $\\\\epsilon$ to zero, it is possible to recover some weaker regularity results (see, e.g., https://www.sciencedirect.com/science/article/pii/S0021782405000164?via%3Dihub)\\n\\nTherefore, there are good reasons to consider the MTW tensor even if one only cares about the original Monge cost. This particular approximation can be fit into the SOS framework by introducing auxiliary variables as we did with the **Example 3** in manuscript. Another cost function which approximates the Monge cost is $c(x,y) = ||x-y|| - \\\\epsilon \\\\log(||x-y||)$, and this satisfies the MTW condition for $||x-y||$ sufficiently large. \\n\\n>In addition, generically, the fundamental theorem of LP says that the optimal transport solution has an almost unique identification, in the sense that a point can at most on average couple with two other points. So what this means is that the regularity and uniqueness in the practical discrete OT case is almost always true. Therefore, the regularity would almost always happen in practice. 
In practice, non-uniqueness is not a big issue for OT-based ML applications, and the added paragraph in the paper isn't strong enough, as unregularized OT is almost never used due to its daunting computational complexity.\\n\\nThese statements are referring to the Kantorovich problem of optimal transport, where solutions are unique but split mass. The regularity for practical discrete OT case is almost always *false*, since the OT coupling will not be Monge. Even when one has an efficient way to solve discrete Kantorovich OT, with or without regularization, it is known to be computationally difficult to extract the Monge map from the support of that optimal Kantorovich coupling. The need and difficulties for the same in ML context are noted in refs. cited in lines 35-36 of our Introduction.\\n\\n**On the importance of solving unregularized OT**\\n\\nUntil very recently, if one wanted to compute discrete OT on a large scale, the only feasible method was to approximate it using a Sinkhorn type algorithm. This development was motivated by the daunting LP complexity which in turn refers to invocation of generic LP solvers. But discrete OT is not a generic LP, and it remains possible to further exploit the structure of the transportation polytope. For instance, the back-and-forth algorithm cited in lines 205-212 is known to solve *unregularized* OT instances faster than the Sinkhorn regularized implementation in POT toolbox (https://pythonot.github.io/). We explained in lines 205-212 why checking NNCC/MTW condition can have broader impact for such algorithm design, beyond checking the regularity of Monge map.\"}", "{\"comment\": \"I thank the authors for their response. I believe that this added discussion will be helpful for placing this work in the literature. 
I will maintain my current score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Weaknesses: Reviewer XnX2\", \"comment\": \"We thank the reviewer for all the comments and questions.\\n\\n**Response to Weaknesses:**\\n\\n- (**On relation between SOS and non-negativity**) We agree that the Archimedean property and that it enables an equivalence between SOS and polynomial non-negativity is not obvious. *In the revised manuscript, we added an exposition in Appendix A about this circle of ideas. In particular, in the main body of the manuscript, in the sentence containing equation (8), we cited Appendix A.3 which specifically discusses this equivalence*.\\n\\n- (**On dimension of numerical experiments**) *In the revised manuscript, we have now included Appendix D to explain the computational scaling w.r.t. dimension. This Appendix D details the runtime computational complexity w.r.t. both dimension $n$ and the number of semialgebraic constraints $\\\\ell$*. For the forward problems, the SOS worst-case complexity is polynomial w.r.t. $n$ and sub-quadratic in $\\\\ell$. We point out that our analyses are valid for off-the-shelf generic interior point SDP solvers as used in our numerical examples, but do not account for the sparsity patterns induced by the block diagonal structure specific to our formulations. This is why the runtimes observed in our numerical examples are better than the theoretical worst-case derived in the newly included Appendix D. 
In practice, additional speed-ups are possible for specific problems by taking into account suitable symmetries of the cost $c(x,y)$ and/or the manifold $\\\\mathcal{M}$ (e.g., translation and/or rotational invariance), pre-solving the underlying SDP problems to eliminate unnecessary decision variables and constraints, and employing relaxations such as diagonally dominant SOS (DSOS) [1] and sparse SOS (SSOS) [2].\\n\\n- (**On application of this work to OT problems**) This work contributes to the OT literature in two ways. First is a direct contribution in the sense that automated certification of the MTW/NNCC condition, as demonstrated in our work, will allow researchers to a priori verify the continuity of the OT map for the specific cost/geometry of interest. This will in turn help design custom approximation algorithms for the OT map with guarantees. Second is an indirect contribution in the sense that even when the OT map may fail to be continuous, both the MTW condition and the NNCC condition provide detailed information about the sub-differential structure of $c$-convex functions [3]. This can in turn enable the design of fast numerical algorithms [4] for solving *unregularized* OT problems for general cost functions. *In the revised manuscript, we have explained this prospect in the paragraph right before Sec. 2.2*. 
241\\u2013283, 2009.\\n\\n[4] M. Jacobs and F. L\\u00b4eger, \\u201cA fast approach to optimal transport: The back-and-forth method,\\u201d Numerische Mathematik, vol. 146, no. 3,\\npp. 513\\u2013544, 2020.\"}", "{\"title\": \"Closing Response to Reviewer P2kY\", \"comment\": \"Dear Reviewer P2kY,\\n\\nIf you feel we have adequately addressed the questions and weaknesses raised in our rebuttal and significant revised version of the manuscript, kindly consider increasing the score. Thanks again for your time and feedback to help improve our work.\\n\\nBest regards,\\n\\nAuthors\"}" ] }
0A6f1b66pE
Unleashing the Power of Selective State Space Models in Vision-Language Models
[ "Honghao Chen", "Yibing Song", "Shoufa Chen", "Chongjian GE", "Kaiqi Huang" ]
While emerging multi-modal large language models (MLLM) have demonstrated impressive advances, the quadratic complexity of their Transformer-based LLMs (3B or larger) inevitably leads to considerable computational overhead. On the other hand, the recently proposed selective state space model (i.e., Mamba) enjoys both model capacity and computational efficiency, making it an ideal component to enhance MLLM's efficiency and performance. However, recent attempts to introduce Mamba into MLLMs simply replace their LLMs with Mamba, ignoring the unique characteristics of either side. We argue that such a naive combination cannot exhibit the potential of Mamba in MLLMs. In this paper, we delve into harnessing Mamba's unique properties, and propose tailored designs from both multi-modal input and architectural perspectives to unleash its true power. First, we fully utilize Mamba's linear complexity to construct visual long sequences for a thorough perception at a minor efficiency burden. To integrate the scanning mechanism with the built visual long sequence, we devise a novel cross-stitch scanning approach to capture and fuse spatial and semantic properties simultaneously, enhancing the interaction of visual information and the vision-language alignment. Built upon these designs, we propose MambaVLM, a simple yet effective MLLM framework that exhibits highly competitive results across multiple benchmarks. Moreover, our framework is also compatible with Transformer-based LLMs (e.g., Vicuna), demonstrating remarkable training and inference efficiency. Notably, with only 0.66M data and 14 hours training on a single A800 node, our MambaVLM outperforms LLaVA-1.5 by significant margins and performs on par with or even better than the 1.4B data trained Qwen-VL. The appealing results from both effectiveness and efficiency aspects indicate the promising prospects of Mamba in MLLMs.
[ "Vision-Language Models; Mamba;" ]
https://openreview.net/pdf?id=0A6f1b66pE
https://openreview.net/forum?id=0A6f1b66pE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jYYBBuLfcy", "aNqaYbonwc", "KYHkdEkz3y", "C77aBujYsP", "6f6yKR3u3C", "2uH12mldOk" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730731505393, 1731505201610, 1731137765840, 1730646686860, 1730573521913, 1731001482406 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4294/Reviewer_MzvA" ], [ "ICLR.cc/2025/Conference/Submission4294/Authors" ], [ "ICLR.cc/2025/Conference/Submission4294/Reviewer_nHm8" ], [ "ICLR.cc/2025/Conference/Submission4294/Reviewer_7YET" ], [ "ICLR.cc/2025/Conference/Submission4294/Reviewer_ApMf" ], [ "ICLR.cc/2025/Conference/Submission4294/Reviewer_uEcL" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a customized version of the Mamba framework within multimodal large language models (MLLMs). This framework has three core components: a visual long sequence, a Mamba projector, and a Mamba LLM. Experimental results on various benchmarks suggest improved performance and speed compared to several existing methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The framework is concise and clear, making the proposed approach easy to understand.\", \"weaknesses\": \"1): Limited Novelty in Technical Contribution: While the paper proposes a \\\"visual long sequence\\\" as part of the framework, a significant body of literature already exists on augmenting visual features using ensembles of different visual encoders, as demonstrated in works such as [A-D]. The design of the Mamba projector, specifically its cross-stitch scanning scheme that concatenates four scanning paths, seems heuristic rather than theoretically grounded.\\n\\n2): Unclear Motivation for Mamba Projector: The Mamba projector, the primary technical contribution of this paper, has an unclear motivation. 
The 1x1 convolutional MLP layer can be treated as a full attention layer, suggesting that the Mamba projector is an approximation. Lines 250\\u2013253 argue that \\\"a simple MLP layer may not be able to accomplish sufficient vision-language alignment and interaction of different visual features. Therefore, we devise a lightweight mamba projector\\u2026\\\" However, this rationale does not sufficiently justify the addition of the Mamba projector.\\n\\n3): Unfair Experimental Comparisons: For instance, in Table 4, using a longer visual sequence generally increases latency. Models such as TinyLLaVA and MobileVLMv2 should be substituted with the Mamba LLM. In Table 2, MambaVLM shows superior performance, largely attributed to encoder ensembling\\u2014a common approach in the literature.\\n\\n4): Presentation Quality: The paper\\u2019s overall clarity and presentation could benefit from further refinement.\", \"references\": \"[A]: BRAVE: Broadening the Visual Encoding of Vision-Language Models, ArXiv.\\n\\n[B]: Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs, CVPR 2024.\\n\\n[C]: Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders, ArXiv.\\n\\n[D]: Law of Vision Representation in MLLMs, ArXiv.\", \"questions\": \"See Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their valuable comments and we will revise accordingly.\"}", "{\"summary\": \"This paper propose a new way to integrate the Mamba architecture into the multi-modal large language models (MLLM). The technical contribution include: 1. propose using visual long sequence to utilize Mamba's linear complexity. 2. 
design a cross-stitch scanning approach to extract and combine spatial and semantic features simultaneously. The proposed method outperforms LLaVA-1.5 with less training time and better inference efficiency, and achieves similar performance to models trained on larger datasets such as Qwen-VL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method achieves competitive results on benchmarks like Open-ended VQA and challenge sets. It outperforms LLaVA-1.5 with less training time.\\n\\n2. The proposed method has good intuition on how to better utilize the Mamba's efficiency with good visualizations.\", \"weaknesses\": \"1. The proposed method is likely to be dependent on vision encoders. It would be more solid if the author could conduct additional experiments on encoders other than DINOv2 + SigLIP. Also, the author does not show how the proposed method performs on a single-vision-encoder MLLM.\\n\\n2. There are not enough ablation experiments on the scanning orders. For example, no comparison with only using Hv1.\", \"questions\": \"1. In the introduction, the authors mention that the proposed framework is also compatible with Transformer-based LLMs, but there seem to be no experiments on applying the proposed method to Transformer LLMs?\\n\\n2. What is the Merge operator in the equation (8)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Performance is validated on various VLM benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to understand with clear illustrations on the proposed methods.\\n2. According to the experiments, MambaVLM achieves overall better performance compared to previous VLMs, such as Qwen-VL, LLaVA-1.5, and Cobra.\", \"weaknesses\": \"1. The novelty is limited with the following reasons: (1) The model is based on Cobra, with minor changes on the concatenation of visual features and projector. (2) The scan directions are from VMamba, only the stitch-scan is novel.\\n\\n2. Although the sequence-level concatenation improves the performance, it poses a great concern on the efficiency of the model, but the authors did not provide the inference speed, computational cost, and memory cost comparisons. Though the Mamba has linear computational complexity, longer sequence indeed increases the FLOPs and memory consuption, and the heavy projector also introduces additional costs. As a result, directly compare the model with existing methods such as Cobra without comparing the efficiency is **unfair**.\\n\\n3. In Figure 1, directly comparing LLaVA-1.5 with MambaVLM to demonstrate the effective of Mamba and the superiority on training time is unfair, as MambaVLM uses better DINOv2-SigLIP encoder.\\n\\n4. In lines 215~235, \\\"regardless of how many channels ... loss of visual information\\\" is overstated, lacking precise theoretical evidence to support the claims. Bottleneck-structures are widely used in networks such as ResNet, and according to information bottleneck principle, it is no clear evidence to state that the compression of channels will definitely lose the valuable information. Please reword.\\n\\n5. In Table 1, some results (62.6, 76.3) of MambaVLM is not the best and should not be bolded. 
Please correct them.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"MambaVLM is a highly efficient multi-modal large language model framework that integrates Mamba\\u2019s linear complexity with a novel cross-stitch scanning approach to improve both visual information interaction and vision-language alignment. Achieving competitive benchmark results with only 0.66 million data points and 14 hours of training on a single A800 node, MambaVLM significantly outperforms LLaVA-1.5 and rivals the performance of Qwen-VL, demonstrating Mamba\\u2019s potential in enhancing MLLM efficiency and effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method in the paper performs very well, achieving better performance than LLaVA 1.5 with only half the training time.\\n2. The approach is ingenious, using Mamba for long-context vision-language modeling is a promising avenue worth exploring.\\n3. The paper is written with a clear structure.\", \"weaknesses\": \"1. The performance comparison with the original LLaVA is somewhat unfair, as the method in the paper uses two visual encoders. It would be better if a version with only ViT-CLIP could be provided.\\n2. The method description in the paper is unclear; perhaps I missed where it explains how Mamba-VLM + Vicuna is implemented. It seems that if Vicuna is used, only the Mamba projector is related to Mamba. 
Of course, I also understand that the performance of VLMs is highly dependent on the performance of the LLM, and Mamba as an LLM is still relatively weak.\", \"questions\": \"Please see the section on weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MambaVLM, a novel framework that utilizes the Mamba model, a state-of-the-art selective structured state space model renowned for its linear computational complexity and its efficiency in managing long sequences. The authors enhance the Mamba model by incorporating visual long sequences and a cross-stitch scanning mechanism, specifically tailored to boost interaction and alignment between visual and linguistic data. Through extensive experiments and qualitative analyses, they establish MambaVLM not only as a powerful tool for MLLM tasks but also as a pioneering approach that sets a new benchmark for future research in the field.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors develop visual long sequences that enhance representation capabilities, ensuring more robust and detailed visual data processing.\", \"The authors introduce an innovative cross-stitch scanning mechanism designed to improve the interaction between visual and linguistic data, optimizing vision-language alignment.\", \"The authors present MambaVLM-a robust and streamlined MLLM framework.\\u00a0Their extensive testing across various benchmarks validates the effectiveness of their approach.\"], \"weaknesses\": [\"The contributions are vague; it would be better to clearly summarize the contributions of this paper at the end of the Introduction. This article simply replaces the traditional MLLM with the Mamba model, and the proposed Stitch-Scan is merely a data augmentation stitching method.\", \"The experiments are insufficient. 
The core argument of this article is: \\\"we first construct visual long sequences with multiple vision encoders, which not only enrich visual representations but also leverage the advantages of Mamba in handling long sequences. Notably, this design will not undermine the efficiency obviously, which is in stark contrast with the common cognition of Transformer-based MLLMs.\\\" Is there any experimental or theoretical support for this conclusion? How much is \\\"not undermining the efficiency obviously\\\" specifically? It is recommended that a row be added to Table 4 so that the visual tokens of MambaVLM and MobileLLaMA-2.7B are also consistent at 144, which would support the above point.\", \"Formula 7 is expressed non-standardly; do not mix mathematical symbols with code.\", \"In Formula 8, Hv = Merge(Hv1, Hv2, Hv3, Hv4), the Merge method is not explained in the text. What specific merging technique is used, just a simple concatenation?\", \"In Table 1, the Qwen-VL model outperforms MambaVLM in performance on TextVQA and VQAv2 with a data scale of 665K. Typically in papers, bold numbers indicate the best results obtained by models, but this is not the case in your table. If the bold numbers have a special meaning, please explain this in the text. Additionally, the same issue occurs in Table 2.\"], \"questions\": \"Please see weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
09TI1yUo9K
Noise is More Than Just Interference: Information Infusion Networks for Anomaly Detection
[ "Hanzhe Liang", "Can Gao", "Jinbao Wang" ]
3D anomaly detection is a crucial task in computer vision, aiming to identify anomalous points or regions from point cloud data. However, existing methods may encounter challenges when handling point clouds with high intra-class variance, especially for methods that rely on registration techniques. In this study, we propose a novel 3D anomaly detection method, termed Information Gain Block-based Anomaly Detection (IGB-AD), to address the challenges of insufficient anomaly detection information and high intra-class variance. To extract ordered features from 3D point clouds, the technique of Rotation-Invariant Farthest Point Sampling (RIFPS) is first introduced. Then, an Information Perfusion (IP) module composed of stacked Information Gain Blocks (IGB) is proposed to utilize prior noise to provide more distinguishing information for the features, where IGB is designed to utilize noise in a reverse-thinking manner to enhance anomaly detection. Finally, a Packet Downsampling (PD) technique is developed to preserve key information between multiple clusters to solve the complex downsampling situation. The main purpose of the framework is to utilize the effective information within prior noise to provide more detection criteria for anomaly detection. In addition, an Intra-Class Diversity (ICD) 3D dataset is constructed, which contains multiple categories with high class-variance. Experimental results show that the proposed IGB-AD method achieves State-Of-The-Art (SOTA) performance on the Anomaly ShapeNet dataset, with a P-AUROC of 81.5% and I-AUROC of 80.9%, and also gains the best performance on the ICD dataset, with a P-AUROC of 57.4% and I-AUROC of 60.2%. Our dataset will be released after acceptance.
[ "Self-supervised learning", "Anomaly detection" ]
https://openreview.net/pdf?id=09TI1yUo9K
https://openreview.net/forum?id=09TI1yUo9K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTk80ZrXIU", "x8783zmbt5", "jStl7qakUu", "Sej6O7YPGT", "LqtNL6NAiQ" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730588227862, 1729944569482, 1730480602931, 1729787206957, 1731657306251 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission739/Reviewer_doed" ], [ "ICLR.cc/2025/Conference/Submission739/Reviewer_bPxg" ], [ "ICLR.cc/2025/Conference/Submission739/Reviewer_6AHZ" ], [ "ICLR.cc/2025/Conference/Submission739/Reviewer_2S64" ], [ "ICLR.cc/2025/Conference/Submission739/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a novel method that uses a noise prior to learn to improve the features of a handcrafted descriptor FPFH. The FPFH features are reformulated through a series of Information Gain blocks that attempt to extract useful information from noised FPFH features thus decoupling the noise from the useful information contained within the features. The extracted features are then used to create a memory bank which is used at inference for anomaly score estimation. 
A packet downsampling process is also proposed, which is a Mahalanobis distance-based greedy coreset sampling mechanism that better samples features in cases where the observed class is composed of several subclasses.\\n\\nThe authors also propose a new dataset, ICD, where each class is composed of several subclasses providing a unique challenge for 3D anomaly detection methods.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Interesting method that aims to improve existing handcrafted features which seems novel.\", \"The proposed method achieves state-of-the-art results on the AnomalyShapeNet dataset and on the newly proposed ICD dataset.\", \"Most sections of the paper are well written and easy to follow despite the proposed method being constructed of several components.\"], \"weaknesses\": \"Some implementation details, such as how noise is injected and sampled during the pretraining phase, could be included as it would improve clarity for the reader. Some implementation details are included in the supplementary but could be moved to the main paper.\\n\\nThe results of the comparison of the proposed method to related works could be discussed in more detail. Currently the results on AnomalyShapeNet and ICD are only briefly listed in Section 4.3, however no discussion of the results is given.\\n\\nOn the ICD dataset the current SOTA on AnomalyShapeNet is not evaluated (R3D-AD) and the second best method is a vanilla PatchCore using FPFH features which generally does not achieve SOTA results on 3D anomaly detection benchmarks. Given that the ICD dataset is one of the claimed contributions of this paper the evaluation should be more thorough and the discussion of the results more detailed.\\n\\nThe ablation study is done on the newly proposed ICD where the performance is very low (0.6 AUC). 
This makes it difficult to really evaluate the components of the method since most anomalies are already missed and the difference between most experiments is less than 1% AUROC.\\n\\nOverall I believe the experimental section is the most lacking. There is a lack of discussion of the results on both the AnomalyShapeNet and the ICD dataset. Additionally, the evaluation on the ICD dataset could be more thorough. Methods that are included in the AnomalyShapeNet experiments are not included in the ICD experiments. The results are not properly discussed. Only image-level AUROC is used for the evaluation in Section 4.3 but in the ablation study (Section 4.4) other metrics are also used. The ablation study should also be done on AnomalyShapeNet to get a clearer picture of the impact of each design choice.\", \"questions\": \"In Eq. 7, L_richness. It would be useful to give dimensions of X and F. Possibly also to rewrite the equation to make the way this is calculated easier to understand.\\n\\nIn Sec 3.3. - How is noise Z added? Is it sampled once and used for all blocks?\\n\\nIn Eq. 
11, which features are max(s) and min(s) calculated from in the normalization?\n\nWhy no M3DM comparison or comparison on MVTec3D or on Real3DAD that have been published and are more widely cited?\n\nThe BTF method achieves an extremely low AUROC score on the ICD dataset showing a strong correlation between the anomaly score and the normality of the example which may be interesting and should be commented on given that the dataset is one of the contributions.\n\nThe discussion of the experimental results could be expanded.\n\nWhy are the results on the ICD dataset relatively low in terms of the AUROC scores?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript claims that most existing 3D anomaly detection methods require the usage of registration to preprocess point clouds and exhibit high intra-class variance. To this end, it proposes IGB, IGB-AD, RIFPS, IP, and PD module to enhance 3D anomaly detection and alleviate these two challenges. Furthermore, it develops an Intra-Class Diversity (ICD) 3D dataset with multiple subclasses. Moreover, the proposed method achieves state-of-the-art performance on one public dataset and the proposed dataset.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1.This manuscript proposes IGB, IGB-AD, RIFPS, IP, and PD module to enhance 3D anomaly detection.\n\n2.This manuscript introduces the ICD dataset for 3D anomaly detection. Different from existing datasets, it includes multiple sub-classes.\n\n3.The method proposed in the manuscript achieves state-of-the-art performance on one public dataset and the proposed dataset\", \"weaknesses\": \"1.The citation format is incorrect, with many references needing to be placed in parentheses. 
The authors should carefully read Section 4.1 of the Formatting Instructions for ICLR 2025 Conference Submissions.\\n\\n2.Lack of details on ICD datasets. Since the ICD dataset is the second contribution, the motivation for its creation should be described in the introduction section. \\n\\n3.The Introduction section could be better articulated. The author spends most of the Introduction describing the current issues with 3D anomaly detection but does not explain how their proposed method effectively addresses these challenges. Deeper insights need to be provided.\\n\\n4.In Page-2 Line-68, R3D-AD reconstructs normal samples from pseudo abnormal point clouds using a Diffusion model and cannot be categorized as a distillation method.\\n\\n5.Lack of experiments. 1) the ablation and comparison experiment on the proposed Rotation-Invariant Farthest Point Sampling (RFPS) 2)The performance of the proposed method on the Real3D-AD dataset.\", \"questions\": \"1. The definition of 'prior noise' is missing. The authors mention 'prior noise' in the abstract and introduction but do not provide a definition, nor is it described in the methods section.\\n\\n2. How does the proposed method tackle the challenge of high intra-class variance?\\n\\n3. In Page-2 Line-77, what is the link between extracting valuable information and high intra-class variance?\\n\\n4. In Table 3, the unit for \\\"Time Cost\\\" needs to be provided, whether it is seconds or milliseconds.\\n\\n5. Are there any hyperparameters in the proposed method? Are they sensitive?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose an Information Gain Block-based Anomaly Detection method to address the issue of high intra-class variance. They introduce Rotation-Invariant Farthest Point Sampling and an Information Perfusion module composed of Information Gain Blocks. 
The authors incorporate noise into 3D anomaly detection to provide more distinctive feature information. Additionally, they construct the Intra-Class Diversity (ICD) 3D anomaly detection dataset. The effectiveness of the method is validated on the constructed dataset and the ShapeNet dataset.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors propose an Information Gain Block-based Anomaly Detection method to address the issue of high intra-class variance. They introduce Rotation-Invariant Farthest Point Sampling and an Information Perfusion module composed of Information Gain Blocks. The authors incorporate noise into 3D anomaly detection to provide more distinctive feature information. Additionally, they construct the Intra-Class Diversity (ICD) 3D anomaly detection dataset\", \"weaknesses\": \"See questions.\", \"questions\": \"1. The authors seem to achieve better performance by stacking layers of IGB and increasing the number of MLPs within them. Is this performance improvement due to increased computational complexity?\\n2. In Table 3, the results without using IP and IGB appear to be better than those with IP and two layers of IGB. Please explain the effectiveness of IGB and IP.\\n3. The comparison methods in Table 1 differ from those in Table 2. It seems that the experimental results of CPMF, IMRNet, and R3D-AD on the ICD dataset are missing in Table 2. It is recommended that the authors include these results to demonstrate the reliability of the experiments.\\n4. The proposed dataset does not seem to have a significant advantage in terms of defect types and quantity. 
It appears to be a selection of a few subclasses from each category in the ModelNet dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Information Gain Block-based Anomaly Detection (IGB-AD) for 3D anomaly detection to address the challenges of insufficient anomaly detection information and high intra-class variance. Overall, the writing is not clear, and the experimental results fail to demonstrate the superiority of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper offers a comprehensive literature review.\", \"weaknesses\": \"it would be better to specify your title to include 3D anomaly detection or point cloud anomaly detection to be more specific.\\n\\nWhat is your definition of noise? The lack of definition makes the motivation hard for me to understand. like, \\\"Noise, as a prior source of information, consists of a combination of various types of data\\\", what is noise?\\n\\nWhy the teacher-student distillation networks are proposed to mitigate the effects of noise in Lines 65 and 67? I am not convinced by this claim. Like in 3DST, RD4AD, CDO, etc., is there any technique related to noise?\\n\\nthe description of the method is hard to understand as well. It would be better if you could improve the overview of your method a bit. Currently, I am not clear about your motivation for the framework, yet the relationships between the proposed components and the motivation are unclear.\\n\\nThe authors only conduct experiments on Anomaly-Shapenet and the established dataset. What about Real3D and MVTec 3D?\\n\\nWe can see in Table 1, that the proposed method can even perform worse than a simple baseline FPFH in some categories, which is confusing and fails to demonstrate the effectiveness of the proposed method.\\n\\nAlso, what about the point-level results? 
In Table 1 and Table 2, only object-level results are presented.\\n\\nThe ablation results in Table 3 fail to demonstrate the effectiveness of individual components since the variation is not significant enough. We can see that with only PD, the authors even achieve higher P-AUROC than some other variants like in rows 1, and 4 of Table 3.\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
09LEjbLcZW
AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions
[ "Ziming Li", "Qianbo Zang", "David Ma", "Jiawei Guo", "Tianyu Zheng", "minghao liu", "Xinyao Niu", "Xiang Yue", "Yue Wang", "Jian Yang", "Jiaheng Liu", "Wanjun Zhong", "Wangchunshu Zhou", "Wenhao Huang", "Ge Zhang" ]
Data science competitions on Kaggle, which represent real-world programming challenges, require sophisticated problem-solving approaches. While LLM-based agents demonstrate potential in various fields, their application to data science tasks often falls short due to difficulties in adapting to data changes in multi-stage reasoning and the need for precise reasoning. To address this, we propose AutoKaggle, a robust and user-centric framework that solves Kaggle problems through a collaborative multi-agent system. AutoKaggle implements an iterative development process that combines code interpretation, debugging, and comprehensive unit testing covering over 30 tests, ensuring code correctness and quality through LLM-based evaluation. It prioritizes user experience by generating detailed reports that elucidate feature engineering processes, data transformations, model selection criteria, and the reasoning behind each decision. It offers customizable workflows, allowing users to intervene and modify each stage of the process, thus combining the advantages of automated intelligence with human expertise. Additionally, we build a universal data science tool library, including carefully verified functions for data cleaning, feature engineering, and modeling, which form the foundation of this solution. We evaluate the framework on 8 carefully selected Kaggle competitions, achieving an average completion rate of 83.8\% and an average rank of 42.8\% on Kaggle.
[ "large language models", "language agents", "multi-agent" ]
Reject
https://openreview.net/pdf?id=09LEjbLcZW
https://openreview.net/forum?id=09LEjbLcZW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zovDqarpW1", "y9TtXaRFkV", "xvdXj2Rjbw", "xvBju7M935", "svkUzP7bKD", "oUZmcrAzqt", "jtpHjCU4qm", "jCgIxnIVlU", "j9KrtSftGi", "i1twv8rGPl", "h4APB1yqPz", "g1uKO9hdz8", "fTk2sg6twf", "czjgayCTZu", "aqKwFWHly0", "YrF3SO0MbA", "XK3rHf8PIp", "OhHinl0k6l", "NS5qn9ITHW", "MsDZpsMUk3", "MOaXLnqFGK", "DFJCc5GMnw", "CQXM3sBH1c", "9K8xpENHAu", "9FY4wlKTCc", "7Sf8dFEk2v", "6Y7cgR5tRY", "51LdCo1AcX", "4XEH6Nk59G", "2nsaMTkDvX", "1Gl8uSGk1z", "19cpPrKwsE" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732575720729, 1733147020908, 1734718596610, 1732629738084, 1732468669048, 1732468816180, 1732554345154, 1732468932446, 1732880406118, 1733139139326, 1733142008306, 1732468770059, 1733141514592, 1732468256688, 1737524295889, 1732468975631, 1732880350208, 1732880207716, 1732469048233, 1730586049754, 1732468519418, 1732469331074, 1732880247884, 1732554406968, 1732469291184, 1732468610468, 1732469224168, 1730720475984, 1733139915403, 1732554429555, 1733191000914, 1730642740447 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_1dns" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Area_Chair_BU2a" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_zZNf" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_zZNf" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_u3cH" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_1dns" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Authors" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_u3cH" ], [ "ICLR.cc/2025/Conference/Submission14027/Reviewer_zZNf" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your clarifications and extra experiments\", \"comment\": \"Hi. Thank you for your additional work.\\nUnfortunately, this late answer does not allow me to check everything in details.\\nMy main question/concern is about the comparison with the new AIDE baseline. You report scores on each task, but it somehow gets a score of 0 on task 8. Why is that ? \\nI think that because of this one outlier, the reported mean is biased. 
What about the median?\\nIf I'd consider task 8 to be an outlier, what would be the conclusion of your experimental evidence ?\\n \\nI do not think that the paper should get rejected because it might perform worse than another approach, there might be qualities of AutoKaggle that AIDE does not possess, that would justify its selection on such tasks. \\n\\nI think that the user study would be an amazing addition to this work. I will augment my score before the end of the rebuttal, but I want to reevaluate the paper to do so, for which, I don't have the time now. \\n\\nCould you again list a short summary of the modifications (due to any reviewer's concern) ?\\nAnother great improvement for next time is if you write your modifications in e.g. blue in the paper, such that the reviewers can spot them easily (for your next rebuttal).\"}", "{\"comment\": \"Dear Reviewer zZNf,\\n\\nThank you for taking the time to carefully review the updates and for providing your thoughtful response. I appreciate your detailed consideration and respect your decision to maintain the score.\\n\\nIf there are any additional points or clarifications you would like me to address in the future, I would be happy to provide further information.\\n\\nThank you once again for your valuable feedback and for the effort you have put into reviewing my work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"metareview\": \"The paper presents AutoKaggle, an LLM creating a multi-agent system together with a library of hand-crafted ML tools in order to solve Kaggle problems. In my opinion, the reviewers did a very thorough job and have presented salient arguments about the suitability of this paper for IJCAI in its current form. While it is interesting to see that an agentic LLM can help with Kaggle competitions, one reviewer points out that a comparison to existing AutoML approaches is missing. Indeed, the authors added a comparison to AIDE. 
However, AutoML is not only \"an LLM agent that generates solutions for machine learning tasks\" (as written on the AIDE github page) but also portfolio and other approaches to automatize (parts of) ML. So, a larger discussion (and comparison) is in order. As it turns out, Kaggle is partnering with the International Conference on Automated ML, see https://www.kaggle.com/automl-grand-prix. Another reviewer points out that some design choices and arguments for motivation are missing; currently, it reads more like what the authors have done, but it is not well placed into the research landscape. More importantly, an evaluation across different types of modalities (time series, text classification, image object detection, ...) is missing or argued why this is currently not so important. One reviewer also pointed out that an ablation study is missing, showing that the hand-crafted tools are not doing the job in the end. So, while the direction is super interesting, it is too early for publication, but we would like to encourage the authors to push for one of the next venues. Please note that the overall judgment should not be taken as a statement regarding the usefulness of your research.\", \"additional_comments_on_reviewer_discussion\": \"The discussion arose from issues raised in the reviews. Issues touched were (missing) baselines, user study, ablation study, and human-written unit tests. Overall, the rebuttal / discussion did not change the mind of the reviewers.\"}
As mentioned in Sect 2.3 (line 233-245), the tools seem not notably better than common ML toolkits (am I correct for this? what's the special?), so why is the tools component highlighted as a core innovation in the main methodology section? As shown in Table 2, the impact of tools is less significant than removing planner/summarizer.\\n\\n3. W4 & W5: As your team manually developed approximately 40 unit tests specifically for tabular datasets (including file checks, data integrity, quality, feature engineering, and submission checks), this raises some concerns:\\na). If these tests were manually crafted for specific datasets, is it fair to compare with other frameworks that don't have such dataset-specific test support?\\nb). How would AutoKaggle generalize to new, unseen tasks where such carefully designed unit tests are not available?\\n\\n4. W7: Why does AutoKaggle take so long? For basic datasets like Task 1-3, where is the main time cost? Could you provide more detailed cost/API call statistics?\\n\\n5. W10: ML code generation should not be particularly challenging for GPT-4-mini, especially for Tasks 1-3. What are the main reasons for the failures in these cases since the result drops significantly?\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [1/5]\", \"comment\": \"Thanks for your valuable feedback and constructive comments. Below, we present our point-by-point response to the weaknesses and comments identified in our submission:\\n\\n> **W1: Limited novelty.** While the paper addresses data science problem-solving using LLM-based agents, it lacks a clear description of the specific challenges it intends to solve that existing methods have struggled with. Extending from a single-agent to a multi-agent system is insufficiently justified in this field, as the necessity and performance gains of such an approach are not clearly demonstrated. 
Existing works, as mentioned in the introduction, have also tackled similar problems with LLM-based agents, questioning the incremental contribution of AutoKaggle.\\n\\nThanks for your comments, which raise an important and valuable issue for discussion.\\n\\nCurrent automation solutions address data science problems from various angles, but our research distinguishes itself both in theme and in methodology:\\n\\n1. AutoKaggle focuses on end-to-end data science tasks, covering all phases, rather than just focusing on a single subtask like data analysis or visualization[1,2].\\n2. Many solutions rely heavily on pre-built expert knowledge bases[3] or historical data pools[4], limiting their scalability and setting a high usage barrier for users.\\n3. Traditional processes lack transparency and differ significantly from human thought patterns. The AIDE framework[6], which performs well in MLE-Bench[5], uses a rapid initial solution generation and iterative improvement approach. However, this one-time solution generation contrasts starkly with AutoKaggle\\u2019s phased detailed planning and multi-agent collaboration. AutoKaggle\\u2019s solutions are longer and more detailed, with traceable reasons for each data handling step (e.g., the removal of a feature due to previously identified excessive missing values), aligning more closely with human logical habits and enhancing comprehensibility.\\n\\nThe multi-agent design is adopted because of the inherent complexity of data science tasks. After decoupling tasks through a phased workflow, each step still requires meticulous planning, and complex information transmission between different phases must be managed. 
A single-agent design may lead to system complexity and entanglement, while a multi-agent design better decouples each phase’s tasks and divides complex information transmission into communications between different agents, making the process clearer and more efficient.\n\nPerformance-wise, AutoKaggle shows a notable enhancement with a 0.28 increase in the Valid Submission metric and a 0.180 improvement in the Comprehensive Score compared to the AIDE framework.\n\nIn summary, compared to previous works, AutoKaggle's contributions can be summarized as follows:\n\n1. End-to-end solutions for data science problems.\n2. Enhancing system scalability and lowering user entry barriers.\n3. Providing a transparent, human logic-aligned, and easily understandable solution.\n4. Significantly improving performance in Valid Submission and Comprehensive Score metrics.\n\n[1] Zhang Y, Jiang Q, Han X, et al. Benchmarking Data Science Agents[J]. arXiv preprint arXiv:2402.17168, 2024.\n\n[2] Hu X, Zhao Z, Wei S, et al. Infiagent-dabench: Evaluating agents on data analysis tasks[J]. arXiv preprint arXiv:2401.05507, 2024.\n\n[3] Guo S, Deng C, Wen Y, et al. DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning[J]. arXiv preprint arXiv:2402.17453, 2024.\n\n[4] Zhang L, Zhang Y, Ren K, et al. Mlcopilot: Unleashing the power of large language models in solving machine learning tasks[J]. arXiv preprint arXiv:2304.14979, 2023.\n\n[5] Chan J S, Chowdhury N, Jaffe O, et al. Mle-bench: Evaluating machine learning agents on machine learning engineering[J]. arXiv preprint arXiv:2410.07095, 2024.\n\n[6] https://github.com/WecoAI/aideml\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [3/5]\", \"comment\": \"> **W3: Role clarity of Planner and Summarizer.** Given AutoKaggle’s sequential, phase-based workflow, the necessity of a Planner agent is ambiguous. 
Can you quantify the contribution (such as on completion rates or error reduction) of this Planner agent in your system? Similarly, the Summarizer’s role in contributing to critical performance metrics such as completion rate or Best Normalized Performance Score, is not explicitly justified, leaving its impact on performance uncertain.\n\nThanks for the valuable feedback. The task planning in AutoKaggle is a combination of stage-based workflows and detailed planning performed by the **Planner**. The **Planner** is responsible for breaking down each stage into more granular tasks while maintaining consistency in data processing logic, making it an essential component of AutoKaggle. For example, in the **Data Cleaning** phase of Task 1 - Titanic, the **Planner** further divides this stage into the following tasks:\n\n1. **Handle Missing Values**: Identify and appropriately handle missing values in the dataset through imputation or removal. \n2. **Treat Outliers**: Detect and address outliers that may distort the analysis or model performance. \n3. **Ensure Consistency Across Datasets**: Check and enforce consistency in format, structure, and relationships between datasets. \n4. **Save Cleaned Datasets**: Store the cleaned datasets for downstream tasks and ensure they are ready for subsequent processing.\n\nThis structured approach enables AutoKaggle to maintain a high level of organization and ensures that each step can be completed methodically and effectively. We hope this clarification provides better insight into the Planner’s role and the workflow design.\n\nThe **Planner** also identifies specific features for each step. For example, the **Handle Missing Values** task specifies features such as `Age`, `Cabin`, and `Embarked` to be addressed. The Planner reviews the report generated in the previous step and derives detailed information. 
Meanwhile, the **Summarizer** generates a **Report** at the end of each stage, serving as a key mechanism for information transfer. The report includes critical details such as changes in data features, file modifications, data processing results, and key findings from the current stage. This report is essential for the Planner in the next stage, enabling more precise planning based on the summarized insights from the previous stage.\\n\\nTo evaluate the importance of the **Planner** and **Summarizer**, we conducted an ablation study:\\n\\n1. **Removing the Planner**: In this setup, the Planner is removed, and each stage proceeds without a detailed plan. The **Developer** directly reads the code and outputs from the previous stage, summarizes them, and writes the code for the current stage independently. \\n2. **Removing the Summarizer**: In this setup, the summary reports are removed. The **Planner** creates plans for the next stage by directly reading the code and plans from the previous stage without a summarized report.\\n\\nThis ablation study highlights the critical roles of the Planner and Summarizer in ensuring efficiency and precision in AutoKaggle's workflow, demonstrating how their inclusion contributes to the system's overall effectiveness.\\n\\n| Task | Task1 | Task2 | Task3 | Task4 | Task5 | Task6 | Task7 | Task8 | Avg. |\\n|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| **AutoKaggle** | 1 | 0.80 | 0.80 | 1 | 0.80 | 0.60 | 0.80 | 0.80 | 0.83 |\\n| **Without Planner** | 0.40 | 0 | 0.20 | 0.40 | 0 | 0.20 | 0.20 | 0.20 | 0.20 |\\n| **Without Summarizer** | 0.60 | 0.20 | 0.20 | 0.60 | 0.20 | 0.20 | 0.40 | 0.40 | 0.35 |\\n\\nThe results show that removing the **Planner** or **Summarizer** significantly decreased AutoKaggle's performance on the valid submission metric, with a drop of 0.63 and 0.48, respectively. This notable decline demonstrates the necessity of both the Planner and Summarizer. 
The decline in performance when the **Planner** was removed can be attributed to the implicit planning requirements within each development task. By decoupling the development tasks into two distinct steps—first, having the Planner create a detailed plan and then having the Developer execute the plan—we ensure consistent data processing logic throughout the workflow. This approach reduces the complexity of the Developer's task at each stage and improves the system's overall success rate.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [4/5]\", \"comment\": \"> **W4: Unit Test and Debugging.** Does the Developer agent generate dataset-specific unit tests that align with each unique code snippet or not? How does the Developer agent adjust unit tests based on code variations to ensure logical consistency and accuracy across different tasks?\n>\n> **W5:** Lines 275-276 mention the importance of detecting logical errors in code, yet the method for achieving this is underexplored. Can you explain more details about detecting the logical error? 
More detail is needed on how logical errors are detected and avoided, as conducting exploratory data analysis or statistical checks after data cleaning or feature engineering alone may be insufficient.\\n\\nOur unit tests are developed by referring to [1]. These tests are not automatically generated by the Developer agent but are manually written and individually verified by our team. According to the findings in [2], providing feedback through appropriate unit tests can significantly enhance self-debugging effectiveness. Therefore, we have designed approximately 40 unit tests for tabular datasets, covering file existence checks, data integrity checks, data quality checks, feature engineering checks, and submission file checks. These tests ensure that the code at each stage accurately achieves its intended purpose.\\n\\nIn our study, we reviewed the Data Cleaning, Feature Engineering, Model Building, Validation, and Prediction stages across eight competitions, totaling 8 x 3 x 2 = 48 stage results. All stages that passed the unit tests showed no logical errors and successfully met the objectives. For example, in the data cleaning stage, passing the unit tests means there are no missing or outlier values, no duplicate entries or features, and the cleaned training and test sets differ only in the target variable.\\n\\nLogical consistency is ensured by the Planner module. At each stage, the Planner refers to the following information for planning:\\n\\n1. The report from the previous stage\\n2. The plan from the previous stage\\n\\nThis approach ensures that each stage remains consistent with the data processing logic of the previous tasks.\\n\\n[1] Zhang Y, Pan Y, Wang Y, et al. PyBench: Evaluating LLM Agent on various real-world coding tasks[J]. arXiv preprint arXiv:2407.16732, 2024.\\n\\n[2] Chen X, Lin M, Sch\\u00e4rli N, et al. Teaching large language models to self-debug[J]. 
arXiv preprint arXiv:2304.05128, 2023.\\n\\n---\\n\\n> **W6:** Table 2 illustrates the system's performance across different debugging attempts (DT), showing how increased debugging impacts metrics like Completion Rate (CR) and Comprehensive Score (CS). The data indicate that both CR and CS improve as DT rises, reflecting enhanced task completion and accuracy with more debugging opportunities. What does 'performance plateaus' mean in lines 524-525?\\n\\nThanks for your valuable feedback. In \\\"performance plateaus\\\", the term \\\"performance\\\" refers to the two indicators Valid Submission and Comprehensive Score. We have amended the original description to \\\"performance\\\" to avoid any possible ambiguity.\\n\\n---\\n\\n> **W7:** The paper does not provide information on the cost of running AutoKaggle, which is essential for evaluating its performance and practical applicability. It would be beneficial to provide cost and total runtime to understand the performance.\\n\\nThe average cost for AutoKaggle to complete a Kaggle competition is $3.13. The runtime is significantly affected by the data volume and hardware configuration. We conducted tests in an environment with a 13th generation Intel Core i9-13900H (20 CPUs). The specific average runtime is as follows:\\n\\n| Task | Duration |\\n|-------------------------|--------------|\\n| Task 1 - Titanic | 29 mins |\\n| Task 2 - Spaceship Titanic | 38 mins |\\n| Task 3 - House Prices | 27 mins |\\n| Task 4 - Monsters | 22 mins |\\n| Task 5 - Academic Success | 1h 33 mins |\\n| Task 6 - Bank Churn | 2h 27 mins |\\n| Task 7 - Obesity Risk | 48 mins |\\n| Task 8 - Plate Defect | 56 mins |\\n\\n---\\n\\n> **W8:** The chosen baselines are not entirely convincing. Recent similar works, AIDE[1] and MLE-Agent[2], have shown remarkable capability in Kaggle competition settings.
A comparative analysis with these recent works, particularly focusing on AutoKaggle\\u2019s unique advantages in effectiveness, efficiency, or other performance metrics, would highlight its distinct contributions to the field.\\n\\nThanks for your valuable suggestions. For this issue, please refer to the first point in the **General Response - Common Problems**.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [2/2]\", \"comment\": \"> **W7:** Why does AutoKaggle take so long? For basic datasets like Task 1-3, where is the main time cost? Could you provide more detailed cost/API call statistics?\\n\\nThanks for your excellent questions! Taking Task 1 - Titanic as an example, I am providing a detailed time cost report below:\\n\\n| **Phase** | **Reader** | **Planner** | **Developer** | **Reviewer** | **Summarizer** | **Total Time Cost** |\\n|---------------------------|------------|-------------|---------------|---------------|----------------|---------------------|\\n| **Understand Background** | 21s | \\\\ | \\\\ | 5s | \\\\ | 26s |\\n| **Preliminary EDA** | \\\\ | 47s | 1min 1s | 13s | 1min 5s | 3mins 6s |\\n| **Data Cleaning** | \\\\ | 50s | 2mins 20s | 13s | 23s | 3mins 46s |\\n| **In-depth EDA** | \\\\ | 41s | 3mins 10s | 12s | 1min 10s | 4mins 13s |\\n| **Feature Engineering** | \\\\ | 1min 13s | 24s | 15s | 31s | 2mins 23s |\\n| **Model Building, Validation, and Prediction** | \\\\ | 57s | 10mins 11s | 35s | 32s | 12mins 15s |\\n| **Total** | 21s | 4mins 28s | 17mins 6s | 1min 33s | 3mins 41s | **27mins 9s** |\\n\\nFrom the detailed time cost, it is evident that the **Developer agent consumes the majority of time, accounting for 63% of the total**. This proportion becomes even higher for datasets with larger data volumes. 
The main time cost lies in running code (e.g., data processing, model training), while the other four agents show relatively consistent time consumption across different tasks.\\n\\nAdditionally, due to AutoKaggle's stage-based execution and its comprehensive reporting feature, even for simple datasets, it performs detailed analyses, resulting in a baseline time cost. This is further influenced by the local environment used for these experiments. The speed of running code (e.g., data processing, model training) was limited by the suboptimal performance of the local hardware, thereby increasing AutoKaggle's overall runtime.\\n\\nTo summarize, the primary reasons for the time cost are as follows: \\n\\n1. **Comprehensive stage-based analysis**: AutoKaggle performs detailed analyses for each stage and generates thorough reports. Even for simple datasets, this results in a baseline time cost. \\n2. **Hardware performance limitations**: These experiments were conducted on a local machine, where the speed of running code was constrained by the hardware's suboptimal performance. If executed on a high-performance server, AutoKaggle's runtime would be significantly reduced. \\n\\nWe assure you that in future experiments, we will verify performance on high-end servers.\\n\\n---\\n\\n> **W10:** ML code generation should not be particularly challenging for GPT-4-mini, especially for Tasks 1-3. What are the main reasons for the failures in these cases since the result drops significantly?\\n\\nThanks for your insightful question!\\n\\nWe analyze the reasons behind GPT-4-mini's failures, and we find that almost all errors occurred due to its inability to correctly read the `train.csv` and `test.csv` files. This highlights a common issue faced by automated frameworks in real-world scenarios\\u2014failure to correctly locate file paths. 
If GPT-4-mini were directly provided with pre-loaded datasets, such as `train_df = pd.read_csv('train.csv')`, it would be capable of generating code to perform data cleaning. However, in real execution environments, its limited ability to comprehend long contexts prevents it from accurately identifying information about file locations within the given context, leading to errors at the very first step.\\n\\nThis also reveals areas where AutoKaggle can be improved. We commit to further optimizing the design of AutoKaggle\\u2019s architecture to enable even relatively weaker base models to complete the entire data science pipeline in real-world environments. \\n\\nThanks again for your valuable questions and comments! We sincerely hope our explanation provides you with greater clarity.\"}", "{\"comment\": \"Dear Reviewer 1dns,\\n\\nThank you for your time and effort in reviewing our paper, as well as for your constructive feedback and valuable suggestions. We sincerely appreciate the thoughtfulness you have brought to the review process.\\n\\nAs the rebuttal period concludes today, we would like to kindly remind you of your earlier comment: \\\"I think that the user study would be an amazing addition to this work. I will augment my score before the end of the rebuttal, but I want to reevaluate the paper to do so, for which, I don't have the time now.\\\" We truly hope that our recent response has addressed your concerns and clarified the points raised.\\n\\nIf our clarifications meet your expectations, we would greatly appreciate your consideration in reevaluating the score. However, if additional questions remain, we would be happy to provide further clarifications within the limited time left.\\n\\nThank you again for your valuable input, which has greatly contributed to improving our work.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"comment\": \"Thank you for the additional results.
After carefully reviewing the updates, I have decided to maintain my score.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [2/5]\", \"comment\": \"> **W2: Multi-agent system design.** The multi-agent system, including agents like Reader, Planner, Developer, Reviewer, and Summarizer, is insufficiently explained in terms of its collaborative structure. It is unclear whether these agents operate in an assembly-line fashion or if they engage collectively in each phase under the \\\"Cooperative Engagement\\\" label in Figure 1. Further clarification on their integration and interdependence within each workflow phase is needed.\\n\\nThanks for the feedback and suggestions. We conclude here to address the concern regarding the clarity of the multi-agent system design in AutoKaggle. The system operates using a \\\"collaborative participation\\\" approach, as defined in [1][2]. These agents collaborate in an orderly and cyclical manner, ensuring the task completion at each stage efficiently. Specifically:\\n\\n1. **Understand Background Phase**: \\n 1. In this phase, only two agents, **Reader** and **Reviewer**, are involved. The **Reader** collects and comprehends background information, while the **Reviewer** examines it to ensure its accuracy and completeness.\\n2. **Subsequent Phases**:\\n 1. In the later phases, four agents\\u2014**Planner**, **Developer**, **Reviewer**, and **Summarizer**\\u2014collaborate to complete the tasks. The workflow is as follows: \\n 2. **Planner**: At the start of each phase, the Planner formulates a task plan based on the previous phase's plans and reports, determining the tools and methods required. \\n 3. **Developer**: The Developer executes the tasks outlined by the Planner, including writing code, running programs, and conducting tests. \\n 4. **Reviewer**: Once the code runs successfully and passes all tests, the Reviewer audits the code's quality, checking for logical errors. 
If the Reviewer detects any issues, the Reviewer provides feedback and returns it to the Planner. The Planner then revises the plan, and the Developer proceeds with redevelopment. \\n 5. **Summarizer**: After the code passes review, the Summarizer compiles a summary of the key outcomes of the phase, including data changes, file modifications, and the specific reasons for data operations. \\n\\nThis collaborative model ensures that tasks at each stage are efficiently completed through the combined efforts of multiple agents, laying a solid foundation for the tasks in the next phase. We hope this explanation clarifies our system's design and the agents' collaborative interactions.\\n\\n[1] Li G, Hammoud H, Itani H, et al. Camel: Communicative agents for \\\"mind\\\" exploration of large language model society[J]. Advances in Neural Information Processing Systems, 2023, 36: 51991-52008.\\n\\n[2] Xi Z, Chen W, Guo X, et al. The rise and potential of large language model based agents: A survey[J]. arXiv preprint arXiv:2309.07864, 2023.\"}", "{\"comment\": \"Dear Reviewer u3cH,\\n\\nThank you for your valuable feedback on our paper. We deeply appreciate the time and effort you have dedicated to the review process.\\n\\nAs the rebuttal period concludes today, we kindly ask if you could find a moment to review our responses to your comments. We have provided point-by-point replies to your concerns, including:\\n\\n1. Addressing the baseline issues. \\n2. Refining the expressions in the paper where optimization was suggested. \\n3. Clarifying the points related to CoT and ReAct. \\n4. Discussing the fully automated nature of AutoKaggle, including human-in-the-loop elements and unit tests. \\n5. Resolving questions about the evaluation metrics and experimental results. \\n\\nWe believe these responses effectively address the issues raised and would be grateful for your feedback.
If any further clarifications are needed, we are more than willing to provide them within the limited time remaining.\\n\\nThank you once again for your valuable insights, which have greatly contributed to improving our work.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"title\": \"General Response\", \"comment\": \"We extend our sincere appreciation to each of the three reviewers for the time dedicated to reviewing our manuscript and for the constructive feedback provided. Your insights and critiques are invaluable.\\n\\n**Common Problems**\\n\\nIn reviewing the feedback from the reviewers, we identified and addressed two recurring themes:\\n\\n1. Lack of Evaluation and Baseline Comparison for AutoKaggle \\n\\nWe have added comparative experiments with the AIDE framework and included results and corresponding analysis of the AutoKaggle framework based on o1-mini in Section 3.2-Main Results. AIDE[1] is the top-performing framework in MLE-Bench[2]. The new main experimental results are presented in Section 3.2-Main Results (Table 1). The results demonstrate that AutoKaggle significantly outperforms the AIDE framework in terms of valid submission rate and comprehensive score across the eight datasets we evaluated.\\n\\n2. Special Optimization of AutoKaggle for Evaluation Datasets \\n\\nAutoKaggle is not specifically optimized for any particular dataset. It is designed to provide a general end-to-end solution for all types of tabular datasets. Our goal is to simplify and optimize the data science workflow, enabling data scientists to handle daily tasks more efficiently. The development of AutoKaggle was based on three toy datasets from [3], rather than the Kaggle competitions we used for testing.\\n\\n**Other Revisions**\\n\\nIn addition to incorporating a comparison with AIDE, we have made the following revisions to the article based on the reviewers' feedback:\\n\\n1.
Revised Evaluation Metrics: We updated the metrics to include Made/Valid Submission (sourced from MLE-Bench) and Comprehensive Score (sourced from [4]). The calculation method for the Comprehensive Score has been adjusted to 0.5 \\u00d7 Valid Submission + 0.5 \\u00d7 ANPS to better align with the evaluation of our framework.\\n2. Added Appendix B-Error Analysis: This section provides a detailed analysis of the error distribution encountered by AutoKaggle during the completion of data science tasks. It also describes the code correction methods used within AutoKaggle.\\n3. Added Appendix E-Case Study: We included a case study using the Titanic competition from Kaggle. This section details the phased workflow of AutoKaggle and presents some intermediate results to enhance understanding of the technical details of AutoKaggle.\\n4. Enhanced README in Anonymous GitHub Repository: We improved the README file to better explain how to use AutoKaggle. Additionally, sample results have been provided in the multi_agents/example_results/ directory for review.\\n\\nMore questions have been answered in the point-to-point responses. We hereby assure you that all of them will be resolved in the manuscript.\\n\\nCollectively, we anticipate that, within our point-to-point responses, the detailed explications of our method and the inclusion of supplementary baseline comparisons will clarify the misunderstandings and substantively address the feedback.\\n\\nWe extend our heartfelt gratitude once again to all the reviewers for their meticulous and insightful critiques.\\n\\n[1] AIDE: https://github.com/WecoAI/aideml\\n\\n[2] Chan J S, Chowdhury N, Jaffe O, et al. Mle-bench: Evaluating machine learning agents on machine learning engineering[J]. arXiv preprint arXiv:2410.07095, 2024.\\n\\n[3] Hong S, Lin Y, Liu B, et al. Data interpreter: An LLM agent for data science[J]. 
arXiv preprint arXiv:2402.18679, 2024.\\n\\n[4] https://github.com/geekan/MetaGPT/tree/2b160f294936f5b6c29cde63b8e4aa65e9a2ef9f/examples/di\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [5/5]\", \"comment\": \"> **W9:** A broader evaluation across various task types such as time series prediction, image classification, and text classification is necessary, as these are critical and challenging categories in Kaggle competitions. The current experiments focus primarily on tabular datasets, leaving it unclear whether AutoKaggle is capable of handling more complex, domain-specific tasks. Can AutoKaggle complete such tasks?\\n\\nThanks for your valuable feedback. We fully understand the importance of conducting a broader evaluation of AutoKaggle across different task types, particularly those that are complex and commonly found in Kaggle competitions, such as time series prediction, image classification, and text classification.\\n\\nThe architecture of AutoKaggle is based on a phase-based workflow and a multi-agent system, which is inherently applicable to all data science processes, not just limited to tabular datasets. Our developers use predefined tool libraries for development, meaning that by extending the machine learning tool library to include tools for time series, image, and text processing, AutoKaggle can handle these more complex task types.\\n\\nWe are actively working on expanding AutoKaggle's capabilities and plan to introduce new tools and techniques to support tasks in these areas. We look forward to showcasing these developments in the near future and demonstrating AutoKaggle's ability to handle a variety of complex, domain-specific tasks.\\n\\n---\\n\\n> **W10:** What are the requirements for the LLM? Can AutoKaggle work well with gpt-3.5 or other open-sourced models?\\n\\nThanks for your great question.
The base models for each Agent in AutoKaggle are different: Reader/Reviewer/Summarizer all use the GPT-4o-mini model, Developer uses the GPT-4o model, while Planner is based on the GPT-4o/o1-mini model. Tasks like Reader/Reviewer/Summarizer, which mainly involve summarizing text information and writing reports, have lower requirements for the base model, and can be handled by GPT-4o-mini or equivalent open-source models. For Agents like Developer/Planner, which require planning abilities (logical reasoning) or coding skills, the base model needs to be GPT-4o or an open-source model of the same level. After replacing the Developer with GPT-4o-mini, the performance of AutoKaggle is as follows:\\n\\n| Task | Task1 | Task2 | Task3 | Task4 | Task5 | Task6 | Task7 | Task8 | Avg. |\\n|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| **AutoKaggle** | 0.20 | 0 | 0 | 0.20 | 0 | 0 | 0 | 0 | 0.05 |\\n\\nThanks again for your suggestions and valuable feedback, and I hope our explanation provides you with greater clarity.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [1/2]\", \"comment\": \"Thanks for your valuable suggestions and questions! Below I will respond to each of your questions one by one:\\n\\n> **W3:** As the experiment results demonstrate, the performance drops significantly without planner/summarizer, but their importance is not well-justified in the paper. However, the paper mainly discusses iterative debugging and testing process and the tool library.\\n\\nThanks for your comments! Since phase-based workflows and multi-agent systems form the foundational architecture of AutoKaggle, the importance of the Planner and Summarizer is integral to this design, so we did not conduct a separate ablation study on them. During the development of AutoKaggle, we realized that building a customized machine learning tool library significantly enhances developers' problem-solving capabilities.
Therefore, in the ablation study, we aimed to comprehensively demonstrate the impact of this customized tool library on performance. Similarly, our experiments on iterative debugging and testing processes are guided by the same idea, exploring how these components work together to optimize the system's overall performance.\\n\\nWhile this paper primarily focuses on the iterative debugging and testing processes and the tool library, we will enhance the revised version by explicitly emphasizing the critical roles of the Planner and Summarizer and providing additional analysis to better justify their contribution to the system's performance.\\n\\n---\\n\\n> As mentioned in Sect 2.3 (line 233-245), the tools seem not notably better than common ML toolkits (am I correct for this? what's the special?), so why is the tools component highlighted as a core innovation in the main methodology section? As shown in Table 2, the impact of tools is less significant than removing planner/summarizer.\\n\\nThanks for your comments! AutoKaggle aims not only to improve task completion rates and performance in data science but also to design a framework that is user-friendly, highly flexible, and customizable. The Machine Learning Tool Library plays a crucial role in achieving this goal.\\n\\nThis tool library can be regarded as a repackaging of existing Python-based data science libraries. By prompting the LLM to generate utility functions tailored to specific needs, complete with type hints and robust error-handling mechanisms, users can easily incorporate customized utility functions based on their specific use cases (we show how to add custom tools in [1]).
This is the foundation of AutoKaggle\\u2019s flexibility and extensibility, which is why we consider it one of the core innovations.\\n\\nWhile Table 2 shows that the impact of the tool library on performance is less significant compared to Planner/Summarizer, its unique value lies in enhancing the framework\\u2019s flexibility and user experience. We believe this makes it an indispensable component of AutoKaggle\\u2019s overall design philosophy.\\n\\n[1] https://anonymous.4open.science/r/AutoKaggle-B8D2/multi_agents/README.md\\n\\n---\\n\\n> **W4 & W5:** As your team manually developed approximately 40 unit tests specifically for tabular datasets (including file checks, data integrity, quality, feature engineering, and submission checks), this raises some concerns: a). If these tests were manually crafted for specific datasets, is it fair to compare with other frameworks that don't have such dataset-specific test support? b). How would AutoKaggle generalize to new, unseen tasks where such carefully designed unit tests are not available?\\n\\nThanks for your good questions!\\n\\nFirst, the development process of these unit tests is as follows: \\n\\n1. We prompt the LLM to propose general unit tests needed for the three stages of Data Cleaning, Feature Engineering, and Model Building, Validation, and Prediction.\\n2. The LLM then generates the unit tests for each functionality listed in step 1.\\n3. We manually verified these unit tests and tested them on three toy datasets from [1] to ensure their validity and completeness.\\n\\nTherefore, these unit tests are not manually crafted for any specific dataset but are general-purpose tests applicable to the Data Cleaning, Feature Engineering, and Model Building, Validation, and Prediction stages. This means they can be directly applied to other tabular datasets handled by AutoKaggle.\\n\\nSecondly, the manual involvement in this process is minimal.
Most of the test design and generation is automatically completed by the LLM, avoiding the complex processes that required substantial manual intervention in past work [2]. Therefore, there is no unfair comparison with frameworks that do not have similar unit test support.\\n\\n[1] https://github.com/geekan/MetaGPT/tree/2b160f294936f5b6c29cde63b8e4aa65e9a2ef9f/examples/di\\n\\n[2] Guo S, Deng C, Wen Y, et al. DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning[J]. arXiv preprint arXiv:2402.17453, 2024.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [1/2]\", \"comment\": \"Thanks for your valuable suggestions and guidance! Below I will respond to each of your questions one by one:\\n\\n> **Q1:** You report scores on each task, but it somehow gets a score of 0 on task 8. Why is that? I think that because of this one outlier, the reported mean is biased. What about the median? If I'd consider task 8 to be an outlier, what would be the conclusion of your experimental evidence?\\n\\nThanks for your question! Task 8 indeed has unique characteristics as it is a multi-target regression problem, while the first seven tasks are all single-target regression problems. For each task, we conducted five repeated runs under identical settings. If all runs failed, it indicates that the framework faces inherent difficulties in handling the task, rather than isolated anomalies. Compared to single-target regression, multi-target regression requires more complex feature interactions and processing. However, AIDE typically employs uniform encoding and transformation without detailed planning for feature interactions, which is the primary reason for its poor performance on Task 8.\\n\\nSince multi-target problems are relatively rare in Kaggle datasets, we reviewed 50 competition datasets and found only one such task.
We performed additional evaluations on this dataset (Task 9), with the results shown below:\\n\\n| **Task** | **Framework** | **Valid Submission** | **Comprehensive Score** |\\n|----------|---------------|-----------------------|--------------------------|\\n| Task 9 | AutoKaggle | 0.80 | 0.752 |\\n| | AIDE | 0.20 | 0.452 |\\n\\nAs shown, AIDE still underperformed, further demonstrating its challenges in handling rare multi-target regression problems. Moreover, since AIDE does not provide detailed logic for its solution generation (as it uses a one-stop generation approach), we could not analyze its behavior patterns further.\\n\\nIt is worth noting that even if Task 8 is considered an outlier and excluded, AutoKaggle still outperforms AIDE by 0.17 on the Valid Submission metric and by 0.08 on the Comprehensive Score metric, which reinforces its superior performance compared to AIDE.\\n\\n[1] https://www.kaggle.com/competitions/playground-series-s3e26\\n\\n---\\n\\n> **Q2:** I think that the user study would be an amazing addition to this work.\\n\\nThanks for your valuable suggestion! Based on your advice, we evaluated the solutions of AIDE and AutoKaggle across eight criteria and invited five graduate students with Kaggle experience to assess them from a user perspective. 
The evaluation results are as follows:\\n\\n| **Criteria/Framework** | **AIDE** | **AutoKaggle** | **Winner** |\\n|--------------------------|----------|----------------|------------------|\\n| **Code Length** | 5 | 2.8 | AIDE |\\n| **Modularity** | 2 | 4.4 | AutoKaggle |\\n| **External Dependencies**| 4.8 | 3.6 | AIDE |\\n| **Feature Richness** | 2.4 | 5 | AutoKaggle |\\n| **Comment Coverage** | 3.2 | 4.4 | AutoKaggle |\\n| **Code Reusability** | 2.4 | 3.8 | AutoKaggle |\\n| **Comprehensibility** | 3.8 | 4.2 | AutoKaggle |\\n| **Overall (Average)** | 3.2 | 4.0 | AutoKaggle |\\n\\nWe have provided part of the solutions from AIDE and AutoKaggle for the same problem in [1], and the detailed user evaluation in [2] for your reference.\\n\\nFrom the results, AIDE performed better in **Code Length** and **External Dependencies**, while AutoKaggle excelled in **Modularity**, **Feature Richness**, **Comment Coverage**, **Code Reusability**, **Comprehensibility**, and the overall score.\\n\\nIn user evaluations, AIDE\\u2019s solutions were praised for their **conciseness, fewer external dependencies, and ease of understanding**, but were criticized for their **lack of modularity, limited functionality, fewer comments, and poor code reusability**. On the other hand, AutoKaggle\\u2019s solutions were noted to be **lengthy and reliant on custom tools**, but they stood out for their **clear modular design, rich functionality, ability to generate data analysis visualizations, detailed step-by-step comments, and strong code reusability**, which resulted in higher overall user satisfaction.\\n\\n[1] https://anonymous.4open.science/r/AutoKaggle-B8D2/user_study/comparison_results\\n\\n[2] https://anonymous.4open.science/r/AutoKaggle-B8D2/user_study/README.md\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [1/4]\", \"comment\": \"Thanks for your valuable feedback and constructive comments. 
Below, we present our point-by-point response to the weaknesses and comments identified in our submission:\\n\\n> **W1:** Whenever CoT is used as an interpretability tool, I think it's always wise to mention unfaithfulness e.g. https://arxiv.org/abs/2305.04388\\n\\nThanks for your suggestion, we have already cited it in our paper.\\n\\n---\\n\\n> **W2:** There are two places where a long list is hard to read: #1 ~L78: AutoKaggle integrates a comprehensive machine learning tools library, covering three core toolsets: data cleaning, feature engineering, and model building, validation, and prediction.\\n>\\n> \\\\#2 ~L186: The data science process is divided into six key stages: understanding the background, preliminary exploratory data analysis, data cleaning, in-depth exploratory data anal- ysis, feature engineering, and model building, validation, and prediction\\n>\\n> Perhaps \\\"model-building, -validation, and -prediction\\\" would be easier to read.\\n\\nThanks for pointing out the issues. We have optimized these expressions in the new version of the paper.\\n\\n---\\n\\n> **W3:** ~L146: I'm surprised not to see mentioned what seems to me to be the main thing underlying the motivation of multi-agent systems: finite context length, requiring summarisation and specialisation.\\n\\nThanks for your valuable feedback, and this is a great question. Regarding the issue of limited context length mentioned, we have implemented the following optimizations in AutoKaggle:\\n\\nIn our framework, the primary agents for each stage include the Planner, Developer, Reviewer, and Summarizer. Taking the Planner as an example, it receives the following inputs:\\n\\n1. **Report from the previous stage**: This is a summary generated by the Summarizer from the preceding stage, covering changes in files, variations in data features, and key findings from the previous phase. The average length of this input is approximately 1,872 tokens. \\n2. 
**Tool documentation**: We employ a retrieval-augmented generation (RAG) approach to extract only the necessary tool information from the overall documentation, limiting the length to within 4,996 tokens. \\n3. **Plan from the previous stage**: This input ensures consistency in data processing logic across stages and has an average length of about 1,453 tokens. \\n4. **The Planner's prompt information**: This is specific to the Planner and has an average length of 1,176 tokens. \\n\\n(The above averages are based on five iterations of AutoKaggle completing Task 1 of the Titanic competition.)\\n\\nIn summary, the total input length for the Planner is approximately 9,497 tokens, which is well under one tenth of GPT-4o's 128,000-token context window. Therefore, additional compression of the input information is unnecessary. Furthermore, our design highlights careful information summarization and thoughtful input allocation to each agent in the framework.\\n\\n---\\n\\n> **W4:** It's not clear how much of the headline 43% on the leaderboard is down to the skill of the human-in-the-loop, which severely undermines the claim. Without a comparison to how well the human performs unassisted (in terms of success rate or time taken), or to how well AutoKaggle performs without HITL, it's impossible to reliably state how effective the framework is. \\n>\\n> Unspecified HITL also undermines the various claims of a \\\"fully automated framework\\\" (e.g. L175)\\n\\nThanks for your valuable feedback. We apologize for the confusion in our writing and figures. It is essential to clarify that the test results presented in the paper were achieved without any human intervention. One key feature of our framework is that it allows for manual adjustments. Users can modify parameters in the config.json file to enable customization at each stage. For example, users can manually revise the plan to better align with specific requirements or goals after the Planner completes its planning phase.
This feature provides flexibility for tailoring the framework to diverse scenarios. We plan to conduct a user study where independent users will try AutoKaggle in their daily data science scenarios. They will provide feedback on the ease of setup and interaction.\"}", "{\"summary\": \"This paper presents a scaffolding framework which uses LLMs to create a multi-agent system, used to attempt Kaggle problems. They use a \"phase-based\" multi-agent approach, together with a library of hand-crafted ML tools, and extensive hand-crafted unit-tests tailored to the Kaggle problems.\n\nApplying this framework to 8 Kaggle problems (4 pre-GPT-4 training cut-off, 4 afterwards), they achieve a significant solve rate, and an average of 42% on the Kaggle leaderboard.\n\nThe paper also explores ablation of various modules (various tools, and the unit-testing module).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A system which can score 43% on Kaggle leaderboards is a significant milestone on the path to automated coding and data science. Additionally, many of the challenges such a system faces also arise in more general task completion (e.g. long-term planning, establishing coherency, managing context, preventing execution from looping), so improvements would transfer to AI agents in general.\n\nGreat collection of Classic and Recent challenges, and baselines seem reasonable (though see my Q about the Strong Baseline).\n\nIt's helpful to have this variety of scores (though see my Q about CS).\n\nArchitecture is clearly laid out, and the paper is overall very easy to read.\n\nClear exploration and explanation of the underlying reason why the feature-engineering tools reduce the framework's score (many features, leading to more complexity than the agents can handle).\", \"weaknesses\": \"Whenever CoT is used as an interpretability tool, I think it's always wise to mention unfaithfulness e.g.
https://arxiv.org/abs/2305.04388\", \"there_are_two_places_where_a_long_list_is_hard_to_read\": \"#1 ~L78: AutoKaggle integrates a comprehensive machine learning tools library, covering three core toolsets: data cleaning, feature engineering, and model building, validation, and prediction.\n\n#2 ~L186: The data science process is divided into six key stages: understanding the background, preliminary exploratory data analysis, data cleaning, in-depth exploratory data analysis, feature engineering, and model building, validation, and prediction\n\nPerhaps \"model-building, -validation, and -prediction\" would be easier to read.\n\n~L146: I'm surprised not to see mentioned what seems to me to be the main thing underlying the motivation of multi-agent systems: finite context length, requiring summarisation and specialisation.\n\nIt's not clear how much of the headline 43% on the leaderboard is down to the skill of the human-in-the-loop, which severely undermines the claim. Without a comparison to how well the human performs unassisted (in terms of success rate or time taken), or to how well AutoKaggle performs without HITL, it's impossible to reliably state how effective the framework is.\n\nUnspecified HITL also undermines the various claims of a \"fully automated framework\" (e.g. L175)\n\nNot much detail on these unit tests. Who writes them? What's the coverage like? Are there any guarantees? If (as I suspect) the \"meticulously designed\" unit tests are written by humans, then we have a similar situation as with the unspecified human-in-the-loop: the framework is not \"fully automated\", and it's impossible to rigorously determine how much effect the human hand-holding has on the framework's success. This should, at minimum, be clearly, explicitly and boldly acknowledged.\n\nAdditionally, it is unclear to me how much of the ML-tools library was developed alongside particular Kaggle Competition attempts.
If the tools were developed on a case-by-case basis, to address hurdles found in the challenge, then there is significant data leakage from the evaluation dataset to the framework, leading to overfitting to the competitions chosen during development, and much of the headline 43% comes from tools handcrafted by human developers on a case-by-case basis. For a fair validation of how well this framework performs in \\\"fully automated\\\" mode, the library would need to be \\\"frozen\\\" while the framework was tested on a held-out set of Kaggle Competitions.\", \"very_minor_point\": \"~L350, I agree that there is a risk of data leakage for competitions from before Oct '23, however to say that GPT-4o's training data includes Classic Kaggle is an assumption: better to say simply that there is a risk of data leakage.\\n\\nIf you're considering data leakage, it would be worth flagging that the 42% includes Classic problems: using only the newer problems, performance is slightly below human average.\", \"questions\": \"~L140, you say that CoT improves reasoning at the expense of introducing hallucinations. Is there any evidence that CoT makes models any more or less likely to hallucinate?\\n\\n~L141, you say that the ReAct paradigm addresses hallucinations - that's not my understanding of what ReAct does or how it works, my understanding is that it combines thoughts and actions, yes, but that this has nothing to do with hallucinations or refining outputs.\\n\\n~L360: What is the difference between \\\"Success - Non-compliant\\\" and \\\"Success - Compliant\\\"?\\n\\n~L403: What's the justification / motivation for the complex / compound \\\"Comprehensive Score\\\"? 
How does it compare to other measures, what specifically does it achieve or avoid?\\n\\n~L431: could you say more about this \\\"strong baseline\\\" - I don't understand its construction.\\n\\nIf adding FE tools drops performance because FE adds too much complexity, then why does \\\"All tools\\\" (which presumably includes FE tools) recover this performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [1/2]\", \"comment\": \"Thanks for your valuable feedback and constructive comments. Below, we present our point-by-point response to the weaknesses and comments identified in our submission:\\n\\n> **W1:** **Lacking evaluation.** The evaluation is lacking comparison to existing AutoML baselines (*e.g. [1]) or explanations on why the authors are not comparing their method to any existing solution. If running such comparison is not possible at all, then the authors should provide explanations on why this is not feasible. While detailed reports are provided on their methods and the different components, as this works apply existing techniques, its evaluation is its core contribution. The authors should report (at least) the standard deviation, but e.g. violin plots to compare AutoKaggle's results of other kaggle competitors could help clearly situate where this automatic pipeline stands.\\n>\\n> **Q2:** Why cannot you compare to any existing baselines ?\\n\\nThanks for your valuable feedback. For this issue, please refer to the first point in the **General Response - Common Problems**.\\n\\n----\\n\\n> **W2\\uff1aEvaluation on a (previously) unknown dataset.** It seems that AutoKaggle has been designed to solve these datasets, so one cannot evaluate how much this method would transfer to another, previously unknown dataset. 
It would be nice to provide the reader with how much out of the box your method is, maybe with a user study. It seems like its your core contribution, so having independent people trying AutoKaggle and commenting on how easy the setup and interaction is on a left out dataset would help people looking for such solutions.\\n>\\n> **Q3:** Have you optimized the creation of your pipeline using these 5 kaggle competitions, or have you left out some of them, to evaluate on competitions you did not know at design time ?\\n\\nThanks for your valuable comments. We focus on the evaluation of AutoKaggle on unknown datasets. Note that AutoKaggle is not optimized for any specific datasets. AutoKaggle was developed based on the three toy datasets [1], not the Kaggle competitions we used for testing. We aim to provide a standard end-to-end processing solution for all tabular datasets. Another goal is to revise and optimize data science workflows so that data scientists can handle their daily tasks more efficiently.\\n\\nWe chose Kaggle competitions as our testing platform during the evaluation process because they are closely related to real-world data science applications. In Section 3.1-Task Selection, we explained how we selected eight evaluation datasets. These datasets cover a variety of data science tasks, including classification and regression with single-target and multi-target variables, to ensure a comprehensive assessment of AutoKaggle's capabilities.\\n\\nIn addition, our approach demonstrates robust performance and a high valid submission rate across various tasks. Our submission success rate (83%) has significantly improved across all 8 Kaggle tasks compared to the AIDE (28%). This result validates AutoKaggle's generalization capability and effectiveness.\\n\\nRegarding the usability of AutoKaggle, we plan to conduct a user study where independent users will try AutoKaggle in their daily data science scenarios. 
They will provide feedback on the ease of setup and interaction. This user study will help researchers better understand the applicability of our approach in different contexts. However, this is an independent research topic, with quantitative and qualitative evaluation methods fundamentally different from autonomous agents and multi-agent systems. We commit to report this aspect systematically in our future studies in information system or human-computer interaction conferences. Thanks again for the suggestions, and we will continue striving to improve our research.\\n\\n[1] https://github.com/geekan/MetaGPT/tree/2b160f294936f5b6c29cde63b8e4aa65e9a2ef9f/examples/di\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [4/4]\", \"comment\": \"> **Q4:** L403: What's the justification / motivation for the complex / compound \\\"Comprehensive Score\\\"? How does it compare to other measures, what specifically does it achieve or avoid?\\n\\nThanks for your good questions.\\n\\nThis metric is adopted from a previous research paper [1]. The **Comprehensive Score** is designed to balance two key aspects: **Valid Submission** and the **Average Normalized Performance Score (NPS)**. \\n\\n- **Valid Submission:** represents the proportion of cases where AutoKaggle successfully generates and submits valid predictions. \\n- **Average NPS:** measures the quality of the predictions made by AutoKaggle.\", \"there_are_scenarios_where_these_two_metrics_might_not_align\": \"1. **High Valid Submission, Low Performance**: AutoKaggle achieves a high proportion of valid submissions, but the submitted results yield average or poor performance. \\n2. **Low Valid Submission, High Performance**: AutoKaggle produces few valid submissions, but those submissions result in outstanding performance scores. \\n\\nTo address these discrepancies and provide a unified evaluation, the **Comprehensive Score** metric was introduced. 
It accounts for both the success rate of valid submissions and the quality of those submissions, offering a holistic measure of AutoKaggle's overall performance.\\n\\n[1] Hong S, Lin Y, Liu B, et al. Data interpreter: An LLM agent for data science[J]. arXiv preprint arXiv:2402.18679, 2024.\\n\\n---\\n\\n> **Q5:** L431: could you say more about this \\\"strong baseline\\\" - I don't understand its construction.\\n\\nThanks for your good question. The strong baseline was a phase-based workflow approach that did not utilize agents. However, this method has been removed in the updated version of the PDF. Instead, we have introduced a new comparison with AIDE to provide a more relevant and robust evaluation. You can refer to the first point in the **General Response - Common Problems** for more details.\\n\\n---\\n\\n> **Q6:** If adding FE tools drops performance because FE adds too much complexity, then why does \\\"All tools\\\" (which presumably includes FE tools) recover this performance?\\n\\nThanks for the great question. After incorporating feature engineering tools, the Feature Engineering phase creates more complex features, such as those derived through Principal Component Analysis (PCA) or Recursive Feature Elimination (RFE). While these advanced tools can enrich the feature set, they also pose challenges: \\n\\n1. Impact on Success Rate: The use of complex tools can reduce the success rate of the Feature Engineering phase due to increased processing complexity and potential errors. \\n2. Challenges for Subsequent Phases: Significant changes in data characteristics introduced during feature engineering can make the **Model-Building**, -**Validation**, and -**Prediction** phases more difficult.\\n\\nHowever, in our framework, the **Model-Building**, -**Validation**, and -**Prediction** phases are equipped with a one-stop model training, validation, and prediction tool. 
This tool simplifies these phases, as the Developer only needs to follow documentation and provide the correct parameters to complete the task. This streamlined process improves the success rate of these phases, thereby recovering overall performance.\\n\\nThanks again for your suggestions and valuable feedback, and I hope our explanation provides you with greater clarity.\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [2/2]\", \"comment\": \"> **Q3:** Could you again list a short summary of the modifications (due to any reviewer's concern) ? Another great improvement for next time is if you write your modifications in e.g. blue in the paper, such that the reviewers can spot them easily (for your next rebuttal).\\n\\nThanks for your valuable suggestions! Here is a summary of the modifications we made:\\n\\n1. **Added comparison experiments with the AIDE framework**: We included results and corresponding analyses of the AutoKaggle framework based on o1-mini in Section 3.2 (Main Results).\\n2. **Revised evaluation metrics**: Metrics were updated to Made/Valid Submission (referencing MLE-Bench) and Comprehensive Score (referencing [3]).\\n3. **Added Appendix B: Error Analysis**: This section provides a detailed analysis of the error distribution encountered during AutoKaggle's execution of data science tasks and describes its error correction methods.\\n4. **Added Appendix E: Case Study**: Using the Titanic competition on Kaggle as an example, we detailed the staged workflow of AutoKaggle along with some intermediate results to help clarify its technical details.\\n5. **Enhanced the README file in the anonymous GitHub repository**: We improved the explanation of how to use AutoKaggle and provided example results in `multi_agents/example_results/` for review.\\n6. 
**Added a user study part**: Five graduate students with computer science backgrounds and Kaggle experience evaluated the solutions generated by AutoKaggle and AIDE across seven dimensions. The detailed results can be found in the `user_study/README.md` file in the anonymous GitHub repository.\\n\\nAdditionally, we have marked all modified sections in the paper with red for clarity. Thanks again for your valuable suggestions and kind guidance! And I sincerely hope our explanation provides you with greater clarity.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for taking the time and effort to review our manuscript and providing such valuable feedback. As the discussion phase between authors and reviewers is nearing its conclusion, we would like to confirm whether our responses have adequately addressed your concerns.\\n\\nWe provided detailed answers to each of your comments one day ago, and we sincerely hope that our responses have sufficiently clarified your concerns. If you have any remaining doubts or require further clarification, please do not hesitate to let us know. We are more than willing to continue the discussion to address any questions you may have.\\n\\nThank you once again for your time and assistance!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [3/4]\", \"comment\": \"> **W7:** Very minor point: ~L350, I agree that there is a risk of data leakage for competitions from before Oct '23, however to say that GPT-4o's training data includes Classic Kaggle is an assumption: better to say simply that there is a risk of data leakage. If you're considering data leakage, it would be worth flagging that the 42% includes Classic problems: using only the newer problems, performance is slightly below human average.\\n\\nThanks for pointing out this issue. We have included an Ablation Study on competition dates in Section 3.3 of the paper. 
When faced with new competition datasets with no risk of data leakage, AutoKaggle\\u2019s performance shows a slight decline. However, it still maintains a competitive level of performance.\\n\\nAnd we modified Lines 264-265 of the article, changing \\\"GPT-4o's training data includes Classic Kaggle\\\" to \\\"posing a risk of data leakage\\\" to ensure the narrative's accuracy.\\n\\n---\\n\\n> **Q1:** L140, you say that CoT improves reasoning at the expense of introducing hallucinations. Is there any evidence that CoT makes models any more or less likely to hallucinate? \\n\\nThanks for the valuable feedback. Our intention was not to suggest that the chain-of-thought (CoT) approach is more likely to cause hallucinations but rather to emphasize that the issue of hallucinations remains unresolved. To avoid any ambiguity, we have revised our statement to:\\n\\n*\\\"While the chain-of-thought method enhances reasoning, it still faces challenges related to hallucinations and unfaithfulness, potentially due to internal representations.\\\"*\\n\\nWe hope this clarification addresses the concern.\\n\\n---\\n\\n> **Q2:** L141, you say that the ReAct paradigm addresses hallucinations - that's not my understanding of what ReAct does or how it works, my understanding is that it combines thoughts and actions, yes, but that this has nothing to do with hallucinations or refining outputs.\\n\\nThis conclusion is derived from the original ReAct paper [1]. ReAct ensures that tools can be used to verify the LLM's thought process at every step of an LLM's reasoning process, which helps reduce hallucinations. In the abstract section of the original paper, the authors state:\\n\\n*\\\"ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API.\\\"*\\n\\n[1] Yao S, Zhao J, Yu D, et al. React: Synergizing reasoning and acting in language models[J]. 
arXiv preprint arXiv:2210.03629, 2022.\\n\\n---\\n\\n> **Q3:** L360: What is the difference between \\\"Success - Non-compliant\\\" and \\\"Success - Compliant\\\"?\\n\\nThanks for your great question.\\n\\n**Success - Compliant:** refers to successfully generating a `submission.csv` file submitted to Kaggle and receiving a reasonable score. \\n\\n**Success - Non-compliant:** refers to cases where the `submission.csv` file is successfully generated but contains issues that result in errors, a score of 0, or abnormal scores upon submission to Kaggle.\", \"the_potential_issues_of_success___non_compliant_include\": [\"**Data scale issues**: For example, the target variable underwent a log transformation during preprocessing but was not reverted to its original scale, causing scale mismatches in the target variable values.\", \"**Data type issues**:\", \"For example, the target variable is expected to be categorical, but the final values are numeric (e.g., 1, 2, 3) instead of the corresponding category names.\", \"For example, the target variable is expected to be an integer, but the final values are floating-point numbers, leading to a score of 0 on Kaggle.\", \"**Value issues:** Missing or duplicate entries in the submission file.\", \"In summary, **Success - Non-compliant** means that while the `submission.csv` file was generated, its content was flawed, leading to submission errors, a score of 0, or abnormal scores on Kaggle.\"]}", "{\"title\": \"Thanks for your valuable comments! Authors' feedback [2/2]\", \"comment\": \"> **W3:** **Figure 2 could be improved.** The figure could be split to separate the overall pipeline from details on some of its components. Most importantly, what part is using an LLM, what part is using a human expert ? This figure represents 70% of what the reader is looking for, it should provide first the overall intuition, and then enough details on specific core components that you want to highlight.\\n\\nThanks for your valuable feedback. 
We apologize for the confusion in our writing and figures. We have revised the main text and adjusted Figure 1 accordingly.\\n\\n1. Throughout the process described in the main text, AutoKaggle operates without any human involvement. In our evaluation, we assessed only the performance of autonomous multi-agents, ensuring no human intervention to maintain the fairness and objectivity of our assessment.\\n2. In Appendix D.5, we have additionally designed a Human-in-the-loop module for the model. As we replied before, we will conduct a user study where independent users will try AutoKaggle in their daily data science scenarios. We designed this Human-in-the-loop module to support our future research endeavors.\\n\\n---\\n\\n> **W4:** **You related work section is actually a background section.** Your current related work covers some domains that are integrated within AutoKaggle. It thus feels more like a related work of your background section (what AutoKaggle builds upon). Is there any *e.g.* AutoML method that you can compare to ? Any method that addresses the same issue ?\\n\\nThanks for your good questions. We have added a discussion of existing work in the Section 4-Related Work (Lines 508-514).\\n\\n---\\n\\n> **Q1:** Did you do any finetuning over the used models, notably LLMs or are you using frozen models ?\\n\\nThanks for your good question. Without any fine-tuning, the agents in AutoKaggle are created directly using OpenAI's official APIs. Section 3.1, Experiment Details, explains the models underpinning the different agents in AutoKaggle. Specifically, the Reader, Reviewer, and Summarizer are based on the GPT-4o-mini model, the Developer is based on the GPT-4o model, and the Summarizer utilizes the GPT-4o/o1-mini model.\\n\\nThanks again for your suggestions and valuable feedback, and I hope our explanation provides you with greater clarity.\"}", "{\"title\": \"Thanks for your valuable comments! 
Authors' feedback [2/4]\", \"comment\": \"> **W5:** Not much detail on these unit tests. Who writes them? What's the coverage like? Are there any guarantees? If (as I suspect) the \"meticulously designed\" unit tests are written by humans, then we have a similar situation as with the unspecified human-in-the-loop: the framework is not \"fully automated\", and it's impossible to rigorously determine how much effect the human hand-holding has on the framework's success. This should, at minimum, be clearly, explicitly and boldly acknowledged.\n\nThanks for your valuable feedback, and this is a good point. We custom-wrote and individually verified the unit tests used in our framework. These unit tests differ from traditional tests, which focus on individual function definitions in code. Instead, they are designed to validate the execution results of each stage within the workflow. \n\nIf we interpret \"coverage\" as determining whether these unit tests can confirm that the current workflow stage has no logical errors and achieves its intended goals, the answer is affirmative. Specifically, we reviewed 48 stage results (covering the three stages of Data Cleaning, Feature Engineering, and Model-Building, -Validation, and -Prediction across eight competitions: 8 \u00d7 3 \u00d7 2 = 48).
All stages that passed the unit tests were free of logical errors and successfully achieved their objectives.\n\nFor example, in the data cleaning stage, passing unit tests confirms that:\n\n- There are no missing values in the data.\n- No anomalies are present.\n- There are no duplicate entries or redundant features.\n- No features were unintentionally added or removed.\n- The cleaned training and test datasets differ only in the target variable while maintaining consistency in all other features.\n\nThis rigorous unit testing ensures the correctness and reliability of each workflow stage.\n\nAlthough unit tests are manually constructed by us, they possess general applicability and are not built specifically for any particular task. Instead, they can be generalized to unseen tabular datasets. This level of manual intervention is acceptable and does not undermine the claim of being \"fully automated.\"\n\n---\n\n> **W6:** Additionally, it is unclear to me how much of the ML-tools library was developed alongside particular Kaggle Competition attempts. If the tools were developed on a case-by-case basis, to address hurdles found in the challenge, then there is significant data leakage from the evaluation dataset to the framework, leading to overfitting to the competitions chosen during development, and much of the headline 43% comes from tools handcrafted by human developers on a case-by-case basis. For a fair validation of how well this framework performs in \"fully automated\" mode, the library would need to be \"frozen\" while the framework was tested on a held-out set of Kaggle Competitions.\n\nThanks for your valuable feedback. First, we would like to clarify that these tools were not developed specifically for any Kaggle competitions. Our research focuses on tabular datasets, and these tools are broadly applicable in the data cleaning, feature engineering, and modeling processes.
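As a concrete illustration of the stage-level unit tests discussed above, the data-cleaning checks could be sketched roughly as follows. This is a minimal sketch under stated assumptions: the function name, the pandas-based implementation, and the exact checks are illustrative, not AutoKaggle's actual test code.

```python
import pandas as pd

def check_data_cleaning_stage(train: pd.DataFrame, test: pd.DataFrame, target: str) -> None:
    """Illustrative stage-level checks for a cleaned tabular dataset (hypothetical)."""
    # No missing values remain after cleaning.
    assert train.notna().all().all(), "train still contains missing values"
    assert test.notna().all().all(), "test still contains missing values"
    # No duplicate rows survive.
    assert not train.duplicated().any(), "train contains duplicate rows"
    assert not test.duplicated().any(), "test contains duplicate rows"
    # Train and test differ only in the target column.
    assert target in train.columns and target not in test.columns, \
        "target column placement is inconsistent"
    assert set(train.columns) - {target} == set(test.columns), \
        "feature columns of train and test are inconsistent"
```

Checks of this shape are dataset-agnostic: they constrain the *result* of a workflow stage rather than any particular code path, which is what lets them generalize to unseen tabular datasets.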
The tools themselves do not involve data leakage; rather, they are a re-packaging of similar functions found in libraries like sklearn, accompanied by detailed documentation to lower the barrier for using these functions in AutoKaggle.\n\nFurthermore, we did not perform any special tuning based on the datasets we evaluated. The development of AutoKaggle was based on three toy datasets from reference [1], rather than the Kaggle competition datasets we used for testing. Therefore, we believe there is no risk of data leakage.\n\n[1] https://github.com/geekan/MetaGPT/tree/2b160f294936f5b6c29cde63b8e4aa65e9a2ef9f/examples/di\"}", "{\"summary\": \"This paper introduces AutoKaggle, a pipeline to automatically solve Kaggle Competitions. The authors use 5 subparts in a row: a reader, a planner, a developer, a reviewer, and a summarizer. They use LLMs with RAG to develop code-based solutions, with code execution and unit tests. They evaluate their method on 5 Kaggle competition benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Interesting problem.** With LLMs (+RAG) becoming mature, the open-source study of their integration into broader tools that can be directly applied to data science tasks is the natural next step.\n\n**Overall good presentation.** Even if some details are lacking to grasp the authors' exact contribution (notably in the figures), the overall presentation clearly demonstrates the problem and the approach set up to tackle it. \n\n**Interesting metrics and ablation studies.**\", \"weaknesses\": \"**Lacking evaluation.** The evaluation is lacking comparison to existing AutoML baselines (*e.g. [1]) or explanations on why the authors are not comparing their method to any existing solution.
If running such a comparison is not possible at all, then the authors should provide explanations on why this is not feasible.\nWhile detailed reports are provided on their methods and the different components, as this work applies existing techniques, its evaluation is its core contribution. \nThe authors should report (at least) the standard deviation, but e.g. violin plots comparing AutoKaggle's results with those of other Kaggle competitors could help clearly situate where this automatic pipeline stands.\n\n**Evaluation on a (previously) unknown dataset.** It seems that AutoKaggle has been designed to solve these datasets, so one cannot evaluate how much this method would transfer to another, previously unknown dataset.\nIt would be nice to show the reader how much out of the box your method is, maybe with a user study. It seems like it's your core contribution, so having independent people trying AutoKaggle and commenting on how easy the setup and interaction is on a left-out dataset would help people looking for such solutions.\n\n**Figure 2 could be improved.** The figure could be split to separate the overall pipeline from details on some of its components. Most importantly, what part is using an LLM, and what part is using a human expert? This figure represents 70% of what the reader is looking for; it should provide first the overall intuition, and then enough details on specific core components that you want to highlight.\n\n**Your related work section is actually a background section.**\nYour current related work covers some domains that are integrated within AutoKaggle. It thus feels more like the related work of your background section (what AutoKaggle builds upon). Is there any *e.g.* AutoML method that you can compare to?
Any method that addresses the same issue?\n\n\n[1] https://github.com/automl/CAAFE\", \"questions\": [\"Did you do any finetuning over the used models, notably the LLMs, or are you using frozen models?\", \"Why can't you compare to any existing baselines?\", \"Have you optimized the creation of your pipeline using these 5 Kaggle competitions, or have you left out some of them, to evaluate on competitions you did not know at design time?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer zZNf,\n\nThank you for your time and effort in reviewing our paper, as well as for your constructive feedback and valuable questions. We sincerely appreciate the thoughtfulness you have brought to the review process.\n\nAs the rebuttal period concludes today, we kindly ask if our responses meet your expectations or if further clarifications are needed. If they do address your concerns, we would greatly appreciate your consideration in reevaluating the score. Otherwise, we are happy to provide any additional clarifications within the remaining time.\n\nThank you again for your valuable input, which has greatly contributed to improving our work.\n\nBest regards,\n\nAuthors\"}
We are more than willing to continue the discussion to address any questions you may have.\\n\\nThank you once again for your time and assistance!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed responses to my points, which are in general valid and satisfying. In particular, many of your clarifications are helpful, and I look forward to seeing them in a future revision.\\n\\nHowever, my main concern is that the workflow relies on human-written unit tests to check the validity of the data at interim stages. While I appreciate the authors' statement that the unit tests \\\"possess general applicability and are not built specifically for any particular task\\\" and as such \\\"they can be generalized to unseen tabular datasets\\\", I remain uncertain about how much the choice of these particular general-purpose unit tests was informed by discoveries during the development phase, and as such how much the 8 chosen Kaggle challenges should comprise a \\\"development dataset\\\", requiring evaluation on a further \\\"validation dataset\\\" of Kaggle challenges which were selected after the set of unit tests was frozen.\\n\\nAs such, I'm afraid I have decided to maintain my score.\"}", "{\"summary\": \"This paper presents AutoKaggle, a multi-agent framework specifically designed to handle the complexities of Kaggle data science competitions. The framework organizes the competition workflow into six distinct phases\\u2014background understanding, exploratory data analysis, data cleaning, in-depth exploratory analysis, feature engineering, and model development and validation\\u2014allowing agents to work systematically through each stage. Key agents, including Reader, Planner, Developer, Reviewer, and Summarizer, collaborate within this structure, with iterative debugging and unit testing to ensure robustness and accuracy in code generation. 
AutoKaggle integrates a machine learning tools library to streamline tasks, enhance code reliability, and provide users with educational insights through comprehensive reports at each phase. Evaluated across multiple Kaggle competitions, the framework achieved an average completion rate of 83.8% and ranked in the top 42.8% in Kaggle.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"AutoKaggle introduces a tailored phase-based workflow with multi-agent collaboration specifically designed for data science competitions. The system\\u2019s demonstrated high average completion rate and competitive ranking in Kaggle highlight its effectiveness, particularly in tabular classification and regression tasks, showing its strength in handling structured data challenges.\", \"AutoKaggle empowers the Developer agent to perform iterative debugging and unit testing, bolstering the robustness of code generation. Additionally, the integration of a comprehensive machine learning tools library improves the system's efficiency and accuracy, making it better suited for tackling complex Kaggle competitions\"], \"weaknesses\": [\"Limited novelty. While the paper addresses data science problem-solving using LLM-based agents, it lacks a clear description of the specific challenges it intends to solve that existing methods have struggled with. Extending from a single-agent to a multi-agent system is insufficiently justified in this field, as the necessity and performance gains of such an approach are not clearly demonstrated. Existing works, as mentioned in the introduction, have also tackled similar problems with LLM-based agents, questioning the incremental contribution of AutoKaggle.\", \"Multi-agent system design. The multi-agent system, including agents like Reader, Planner, Developer, Reviewer, and Summarizer, is insufficiently explained in terms of its collaborative structure. 
It is unclear whether these agents operate in an assembly-line fashion or if they engage collectively in each phase under the \\\"Cooperative Engagement\\\" label in Figure 1. Further clarification on their integration and interdependence within each workflow phase is needed.\", \"Role clarity of Planner and Summarizer. Given AutoKaggle\\u2019s sequential, phase-based workflow, the necessity of a Planner agent is ambiguous. Can you quantify the contribution (such as on completion rates or error reduction) of this Planner agent in your system? Similarly, the Summarizer\\u2019s role in contributing to critical performance metrics such as completion rate or Best Normalized Performance Score is not explicitly justified, leaving its impact on performance uncertain.\", \"Unit Test and Debugging. Does the Developer agent generate dataset-specific unit tests that align with each unique code snippet? How does the Developer agent adjust unit tests based on code variations to ensure logical consistency and accuracy across different tasks?\", \"Lines 275-276 mention the importance of detecting logical errors in code, yet the method for achieving this is underexplored. Can you provide more details about how logical errors are detected? More detail is needed on how logical errors are detected and avoided, as conducting exploratory data analysis or statistical checks after data cleaning or feature engineering alone may be insufficient.\", \"Table 2 illustrates the system's performance across different debugging attempts (DT), showing how increased debugging impacts metrics like Completion Rate (CR) and Comprehensive Score (CS). The data indicate that both CR and CS improve as DT rises, reflecting enhanced task completion and accuracy with more debugging opportunities. What do the 'performance plateaus' in lines 524-525 mean?\", \"The paper does not provide information on the cost of running AutoKaggle, which is essential for evaluating its performance and practical applicability. 
It would be beneficial to provide cost and total runtime to understand the performance.\", \"The chosen baselines are not entirely convincing. Recent similar works, AIDE [1] and MLE-Agent [2], have shown remarkable capability in Kaggle competition settings. A comparative analysis with these recent works, particularly focusing on AutoKaggle\\u2019s unique advantages in effectiveness, efficiency, or other performance metrics, would highlight its distinct contributions to the field.\", \"A broader evaluation across various task types, such as time series prediction, image classification, and text classification, is necessary, as these are critical and challenging categories in Kaggle competitions. The current experiments focus primarily on tabular datasets, leaving it unclear whether AutoKaggle is capable of handling more complex, domain-specific tasks. Can AutoKaggle complete such tasks?\", \"What are the requirements on the LLM? Can AutoKaggle work well with gpt-3.5 or other open-sourced models?\", \"[1] AIDE: the Machine Learning Engineer Agent (https://github.com/WecoAI/aideml)\", \"[2] MLE-Agent: Your intelligent companion for seamless AI engineering and research (https://github.com/MLSysOps/MLE-agent)\"], \"questions\": \"Please refer to the questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
09JVxsEZPf
Towards Comprehensive and Efficient Post Safety Alignment of Large Language Models via Safety Patching
[ "Weixiang Zhao", "Yulin Hu", "Zhuojun Li", "Yang Deng", "Yanyan Zhao", "Bing Qin", "Tat-Seng Chua", "Ting Liu" ]
Safety alignment of large language models (LLMs) has been gaining increasing attention. However, current safety-aligned LLMs suffer from fragile and imbalanced safety mechanisms: they can still be induced to generate unsafe responses, exhibit over-safety by rejecting safe user inputs, and fail to preserve general utility after safety alignment. To this end, we propose a novel post safety alignment (PSA) method to address these inherent and emerging safety challenges, including safety enhancement, over-safety mitigation, and utility preservation. Specifically, we introduce \textsc{SafePatching}, a novel framework for comprehensive and efficient PSA, where two distinct safety patches are developed on the harmful data to enhance safety and mitigate over-safety concerns, and then seamlessly integrated into the target LLM backbone without compromising its utility. Extensive experiments on four representative aligned LLMs, including LLaMA-2/3, Gemma and Mistral, show that \textsc{SafePatching} achieves a more comprehensive and efficient PSA than baseline methods. It even enhances the utility of the backbone, further optimizing the balance between being helpful and harmless in current aligned LLMs. Also, \textsc{SafePatching} demonstrates its superiority in continual PSA scenarios. \textcolor{red}{WARNING: This paper may contain content that is offensive and harmful.}
[ "Post Safety Alignment", "Large Language Models", "Jailbreak Defense", "Over-Safety Mitigation" ]
https://openreview.net/pdf?id=09JVxsEZPf
https://openreview.net/forum?id=09JVxsEZPf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z6NZ25wBQ7", "sfKzkDR8gJ", "q5QEteD5l9", "oTiKeA9WGm", "iRxf9zmEuE", "fV6YmI894i", "fSuEx9eqrD", "dvg8fkxJfU", "cCizDCDG5n", "bv5OKB3vfa", "bnBnWLByYz", "b0jmuRhQoN", "RFruMYSGYp", "QvoOEOvRTg", "MujhOTlGz7", "H75mq6RCti", "Am17ogKR00", "4rZ8tS2aW4", "3Y4le7W0NC", "1lqp7HlPjL" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732375246709, 1732021224480, 1732021003988, 1730685243694, 1733121686669, 1732509909457, 1730718171054, 1733815822878, 1732021134520, 1732021308125, 1733121638256, 1732789345745, 1730881565752, 1730116144451, 1733121731069, 1732586523098, 1732021185598, 1732509857564, 1732021058060, 1732021265899 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6315/Reviewer_Kpns" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Reviewer_c4UP" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Reviewer_mxyn" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Reviewer_pyqz" ], [ "ICLR.cc/2025/Conference/Submission6315/Reviewer_1JaV" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ], [ "ICLR.cc/2025/Conference/Submission6315/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigates how to balance safety enhancement and over-safety mitigation while retaining model utility during post-safety alignment. To achieve this, the authors propose the SAFEPATCHING framework, which optimizes separate patches for safety enhancement and over-safety mitigation. Through controllable patching, these two patches are selectively merged at the parameter level, addressing conflicts between them while preserving model utility. The paper includes a comprehensive evaluation across three dimensions\\u2014safety enhancement, over-safety mitigation, and utility preservation\\u2014demonstrating the effectiveness of SAFEPATCHING. Additionally, it provides a deeper insight by analyzing parameter selection, the distribution of each patch's parameters, and other aspects.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-structured and easy to follow.\", \"The motivation of the paper is clear, and the novel method proposed aligns well with this motivation.\", \"The experiments are thorough, providing a comprehensive evaluation across the three objectives on various baseline models and methods. Additionally, detailed analyses are conducted throughout.\"], \"weaknesses\": [\"The **experimental setting** section seems to overlook the choice of hyperparameters. It appears that key hyperparameters like top rate, scale weight, etc., are only mentioned in the appendix, with no indication in the main text of how these crucial settings were determined for the primary experiments (if I missed this, please let me know).\", \"**Minor suggestions and areas for improvement** (though not sufficient reasons for rejection): The experimental section is somewhat dense, especially section 5.2. 
Breaking it down into more subsections or adjusting the layout could enhance readability. Important hyperparameters like top rate, scale weight, and retention rate should ideally be summarized in a table or have a representative results diagram in the main text rather than placing all analysis in the appendix.\"], \"questions\": [\"The results for top rate were somewhat surprising, as the model appears overly robust to variations in top rate. A small question arises here: could this be due to the narrow range of top rate choices? Expanding the range to include smaller or larger top rates might yield more insights.\", \"**Figure 3 Insight**: The distribution of over-safety and safety parameters across Transformer layers shown in Figure 3 is very intriguing, especially with one concentrating in the middle layers and the other in the lower layers. Could you provide any insights or explanations for this phenomenon?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> Weakness 2.2 and Question 2: The authors did not specify how they fine-tuned the Longformer-based judger. What impact might this setup have on the accuracy and reliability of the judgment model in this experiment?\\n\\nWe apologize for any ambiguity. In our experiment, we **directly adopt** the Longformer model provided by Wang et al. **without any further finetuning**. \\n\\nTo validate the accuracy and reliability of the judgment model used in our experiments, we sample 50 instances from AdvBench and compare its classification performance with GPT-4o, where human judgements serve as the golden reference. 
This comparison confirms the Longformer-based model\\u2019s effectiveness in our specific setting.\\n\\n|Model|Accuracy|\\n|---------------------|----------|\\n|Longformer|96.70%|\\n|GPT-4o|97.80%|\\n\\n> Regarding Ethics Concern: Potential for Misuse and Safety Bypass\\n\\nThank you for raising this important concern. As noted in Appendix I, we have discussed the potential risks associated with SafePatching. While the framework is designed to balance safety and usability by allowing benign responses to sensitive keywords, we recognize the potential risk of misuse by bad actors.\\n\\nTo mitigate this, we:\\n- Emphasize research-only usage, ensuring the framework is used solely for advancing safety research.\\n- Conduct extensive evaluations (e.g., on Beavertails and AdvBench), demonstrating SafePatching improves safety without introducing significant bypass vulnerabilities.\\n- Advocate for restricted access and transparency, ensuring controlled deployment and community oversight.\\n\\nWe will further expand on these risks in the revised manuscript and plan to integrate dynamic adversarial testing to strengthen safety.\\n\\n---\\n\\nWe once again sincerely appreciate your comprehensive and valuable suggestions, which are crucial for enhancing the quality of our paper. We hope our response can alleviate concerns you may have regarding our paper and look forward to further communication with you.\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> Weakness 2.3: This highlights that there is more work to be done in effectively controlling the balance between safety enhancement and over-safety mitigation than the approach in its current state.\\n\\nWe appreciate your insights and agree that balancing these objectives is a challenging but crucial aspect of PSA. \\n\\n- Through our experiments, we first identify this crucial challenge and demonstrate the **clear limitations of existing PSA methods in this regard**. 
\\n\\n- As detailed above, SafePatching is currently **the most effective and efficient method** for simultaneously achieving comprehensive PSA, representing a **pioneering exploration in this area**.\\n\\n- While we totally agree with you that there is still room for further improvement, we hope this work inspires the community to continue advancing the field.\\n\\n---\\n\\n> Question 1: How does the use of gradient ascent and descent for patch derivation differ from recent work in unlearning?\\n\\nThank you for your question. We directly adopt gradient ascent and descent techniques from recent work in unlearning. However, they are **only a small component of SafePatching**, specifically used to generate the safety and over-safety patches. And proposing new unlearning or fine-tuning techniques lies outside the scope of our study. Instead, the focus of SafePatching is to provide **an efficient `(results in Table 1)` and effective `(results in Table 2 and 3)` solution** for achieving the three PSA objectives\\u2014safety enhancement, over-safety mitigation, and utility preservation\\u2014simultaneously.\\n\\nWe believe the interesting part in gradient ascent and descent is that we leverage **a single harmful dataset to generate both types of patches**, thereby avoiding additional data overhead. This efficient use of resources may enhance the framework\\u2019s practical utility in real-world applications.\\n\\n---\\n\\nWe once again sincerely appreciate your comprehensive and valuable suggestions, which are crucial for enhancing the quality of our paper. We hope our response can alleviate concerns you may have regarding our paper and look forward to further communication with you.\"}", "{\"summary\": \"This paper proposes a novel post-safety alignment (PSA) method, called SAFEPATCHING, which aims to address safety, over-safety, and utility issues in large language models (LLMs). 
In this paper, the authors develop a two-stage PSA framework, which applies distinct safety patches to the backbone LLM based on harmful data to improve safety and reduce over-safety while maintaining the utility of the LLM. The experiment shows that SAFEPATCHING achieves more effective and efficient PSA compared to baseline methods across four aligned LLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes a new method named SAFEPATCHING to address the limitations of existing methods on post-safety alignment for LLMs, such as over-safety issues and high cost.\", \"The paper presents experimental results and comparisons with state-of-the-art methods to demonstrate the effectiveness of SAFEPATCHING and uses multiple open-source datasets on safety, over-safety, and utility for a comprehensive evaluation. Besides, this paper has interesting findings on the distribution of the most important parameters for safety and over-safety, providing future research directions for the community.\"], \"weaknesses\": [\"Lack of justification in Sec. 3.3 controllable patching. The authors may want to highlight the novelty of their tool and the rigor of their method. Currently, it appears that the approach relies on the SNIP score proposed by Lee et al., as well as model merging methods by Yu et al. and Hui et al., without a thorough explanation of the unique contributions or advancements made in this work.\", \"Although the authors conducted extensive experiments to show the effectiveness of SAFEPATCHING, several concerns exist in the settings.\", \"The study evaluates SAFEPATCHING using only a single harmful dataset, AdvBench, which may not adequately demonstrate the method's transferability across different safety scenarios. 
Given the extensive range of safety categories and perspectives, it's essential to assess whether a backbone LLM patched using AdvBench can maintain its effectiveness on other datasets representing diverse types of harmful content.\", \"The authors did not specify how they fine-tuned the Longformer-based judger. Wang et al. used annotated data generated through human labor to fine-tune their Longformer model. It remains unclear whether the fine-tuned model from Wang et al.'s work was directly utilized in this experiment or if further adjustments were made. Clarification on this point would provide a better understanding of the model\\u2019s setup and any adaptations relevant to this study.\", \"Yu, Le, et al. \\\"Language models are super mario: Absorbing abilities from homologous models as a free lunch.\\\" Forty-first International Conference on Machine Learning. 2024.\", \"Hui, Tingfeng, et al. \\\"HFT: Half Fine-Tuning for Large Language Models.\\\"\", \"Wang, Yuxia, et al. \\\"Do-not-answer: A dataset for evaluating safeguards in llms.\\\"\"], \"questions\": [\"Given that only the AdvBench dataset was used to evaluate SAFEPATCHING, how does the method perform across other safety-related datasets? Could testing with a broader range of harmful data enhance our understanding of its transferability to diverse safety scenarios?\", \"Since the authors did not specify whether they directly used the fine-tuned Longformer model from Wang et al. 
or performed additional fine-tuning, what impact might this setup have on the accuracy and reliability of the judgment model in this experiment?\", \"Could a deeper explanation of these aspects clarify the novelty and rigor of the proposed approach in Section 3.3?\"], \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns', 'Yes, Privacy, security and safety']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind Reminder to Reviewer c4UP\", \"comment\": \"Dear Reviewer c4UP,\\n\\nCould you please let us know if our responses satisfactorily address the issues you raised? We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> Question 2: Could you provide any insights or explanations for the phenomenon in Figure 3?\\n\\nThank you for highlighting this interesting observation. Regrettably, our current conclusions are empirical, and we cannot yet provide definitive explanations. However, our findings are consistent with recent work in the field, which offers some related insights:\\n\\n- Over-Safety Parameters: These parameters are identified through gradient descent on unsafe data, representing the model\\u2019s unsafe parameter set. A recent study [1] also found that unsafe parameters are predominantly located in the lower layers of the model. Their experiments showed that adding the representations of lower layers to the model\\u2019s final output significantly weakens its safety performance.\\n\\n- Safety Parameters: Similarly, recent research [2] has observed that parameters associated with safety performance tend to concentrate in the middle layers of Transformer models. 
This aligns closely with our empirical findings.\\n\\nWe appreciate your question and agree that this is a promising direction for future exploration. To that end, we plan to investigate this phenomenon further using Tulu 3 [3], a model that was open-sourced just two days ago and is the first to provide full transparency in its safety post-training process. This transparency offers a unique opportunity to deepen our understanding of how safety mechanisms are formed within models.\\n\\nThank you again for your insightful question and for motivating us to pursue this exciting research avenue!\", \"reference\": \"[1] Ghandeharioun A, Yuan A, Guerard M, et al. Who's asking? User personas and the mechanics of latent misalignment[J]. arXiv preprint arXiv:2406.12094, 2024.\\n\\n[2] Li S, Yao L, Zhang L, et al. Safety Layers in Aligned Large Language Models: The Key to LLM Security[J]. arXiv preprint arXiv:2408.17003, 2024.\\n\\n[3] Lambert N, Morrison J, Pyatkin V, et al. T\\u00dcLU 3: Pushing Frontiers in Open Language Model Post-Training[J].\\n\\n---\\n\\nWe once again sincerely appreciate your comprehensive and valuable suggestions, which are crucial for enhancing the quality of our paper. We hope our response can alleviate concerns you may have regarding our paper and look forward to further communication with you.\"}", "{\"summary\": \"This paper proposes a post safety alignment method which merges two models post-trained on harmful data with gradient ascent and descent respectively. The post-trained and merged model preserves a balance among safety, over-safety mitigation, and utility preservation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"\\u2022\\tThe idea is straightforward.\\n\\u2022\\tThe experiments are extensive.\", \"weaknesses\": \"\\u2022\\tThe paper lacks a comparison with external safeguard methods such as OpenChatKit and NeMo guardrails that are known to handle over-safety issues. 
Would these external safeguards methods also achieve the three objectives proposed in the paper?\\n\\u2022\\tThere are a few hyperparameters in equation 7&8, such as a, b, \\\\alpha, \\\\beta. How you set these parameters? In Table 3, merging methods like the task arithmetic and TIES-merging do not have big differences compared to the intersect patch. Would the benefit comes from your hyperparameter selection?\", \"questions\": \"Would you please address the concerns in weakness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your constructive feedback and we address your concerns as follows.\\n\\n---\\n\\n> Weakness 1: Would these external safeguards methods, OpenChatKit and NeMo, also achieve the three objectives proposed in the paper?\\n\\nThank you for your valuable suggestion to compare our method with external safeguard methods like OpenChatKit and NeMo guardrails. The **results on four backbones, shown in the tables below**, indicate that while these external methods meet the objective of utility preservation, they could not solve the conflict between safety and over-safety to achieve comprehensive PSA. 
By contrast, our SafePatching still demonstrates strong advantages.\\n\\n|**LLaMA-2-7B-Chat**|Seen|Unseen|XSTest|OKTest|MT-Bench|\\n|--|--|--|--|--|--|\\n|Original|24.00|21.00|8.00|4.67|6.01|\\n|NeMo|**6.00**|8.50|74.00|86.33|5.78|\\n|OpenChatKit|24.00|21.00|10.00|6.00|6.04|\\n|SafePatching|7.25|**7.50**|**3.33**|**2.33**|**6.14**|\\n\\n|**LLaMA-3-8B-Instruct**|Seen|Unseen|XSTest|OKTest|MT-Bench|\\n|--|--|--|--|--|--|\\n|Original|12.50|18.50|2.80|9.67|8.20|\\n|NeMo|**1.00**|**1.00**|15.20|14.67|8.11|\\n|OpenChatKit|12.50|18.50|4.40|11.00|8.17|\\n|SafePatching|4.75|13.50|**1.60**|**6.33**|**8.18**|\\n\\n|Gemma-1.1-7B-it|Seen|Unseen|XSTest|OKTest|MT-Bench|\\n|--|--|--|--|--|--|\\n|Original|24.50|34.00|28.80|23.33|6.82|\\n|NeMo|**1.50**|**1.00**|36.00|30.00|6.74|\\n|OpenChatKit|24.50|34.00|28.80|23.33|6.83|\\n|SafePatching|1.75|15.00|**16.00**|**14.67**|**6.94**|\\n\\n|Mistral-7B-Instruct-v0.1|Seen|Unseen|XSTest|OKTest|MT-Bench|\\n|--|--|--|--|--|--|\\n|Original|75.00|69.5|14.00|6.33|6.49|\\n|NeMo|26.25|25.50|21.00|6.33|**6.45**|\\n|OpenChatKit|75.00|69.50|15.00|6.33|6.43|\\n|SafePatching|**6.75**|**15.50**|**5.20**|**3.33**|6.38|\\n\\n---\\n\\n> Weakness 2: There are a few hyperparameters in equation 7&8, such as a, b, \\\\alpha, \\\\beta. How you set these parameters?\\n\\nWe would like to clarify that, in fact, **we have provided** a detailed analysis of the robustness and selection process for these hyperparameters including overall retention rate $p$, top rate $a$ and $b$, and scale weight $\\\\alpha$ and $\\\\beta$ `(lines 513 - 516)`. 
Experimental results on four backbone models (LLaMA-2, LLaMA-3, Gemma and Mistral) are shown `in Figures 5 through 9 in Appendix G.3`, demonstrating the impact of different settings and the rationale behind our final choices.\", \"we_can_draw_two_conclusions_from_the_hyper_parameter_analysis\": \"- Random retention rate $p$ and the top rate $a$ and $b$ are robust across different backbones.\\n- The only model-specific hyper-parameter we need to adjust is the scale weight $\\\\alpha$ and $\\\\beta$, and:\\n - Larger $\\\\alpha$ and $\\\\beta$ can negatively impact overall model performance and lead to meaningless responses, potentially leading to a loss of utility (especially on Mistral and Gemma).\\n - Within an appropriate value range, $\\\\alpha$ and $\\\\beta$ further balance the conflict between safety and over-safety: the larger $\\\\alpha$, the more pronounced the effect of the Safety Patch, enhancing safety performance but increasing over-safety. Conversely, a larger $\\\\beta$ has the opposite effect.\\n\\n---\\n\\n> Weakness 3: Would the benefit in arithmetic and TIES-merging comes from your hyperparameter selection?\\n\\nWe would like to clarify that the results for Arithmetic and TIES-merging **have already been obtained after optimal hyperparameter tuning for each method**. For further clarity, we report the hyperparameter tuning process for them in terms of $\\\\alpha$ and $\\\\beta$, similar to that process used for our method, in the table below. 
Best results (reported in the current paper) are in bold, and \\u2018/\\u2019 indicates that the resulting model is too damaged to produce meaningful outputs.\\n\\n|Task Arithmetic|Seen|Unseen|XSTest|OKTest|AVG.|MT-Bench|\\n|--|--|--|--|--|--|--|\\n|(1.0,0.05)|/|/|/|/|/|/|\\n|(1.0,0.1)|14.00|12.00|12.00|15.00|39.96|5.99|\\n|**(1.0,0.2)**|**7.50**|**7.50**|**7.33**|**8.40**|**40.34**|**6.08**|\\n|(1.0,0.5)|26.75|25.00|3.20|6.33|40.21|6.03|\\n|(1.0,1.0)|56.25|57.75|1.60|1.33|39.99|5.93|\\n|(1.0,1.5)|85.25|87.00|0.00|0.00|39.58|5.94|\\n\\n|TIES-Merge|Seen|Unseen|XSTest|OKTest|AVG.|MT-Bench|\\n|--|--|--|--|--|--|--|\\n|(1.0,0.05)|/|/|/|/|/|/|\\n|(1.0,0.1)|4.50|3.00|18.40|24.67|39.77|5.96|\\n|**(1.0,0.2)**|**12.00**|**11.00**|**7.67**|**6.00**|**40.43**|**6.09**|\\n|(1.0,0.5)|25.75|27.50|3.20|5.00|40.21|6.03|\\n|(1.0,1.0)|61.25|65.50|1.20|1.33|39.99|5.93|\\n|(1.0,1.5)|85.00|85.50|0.00|0.00|39.58|5.94|\\n\\nThese results demonstrate that our SafePatching still maintains an advantage in performance across various scale weights used for Task Arithmetic and TIES-Merging, highlighting its robustness in achieving a balanced trade-off between safety, over-safety mitigation, and utility preservation.\\n\\n---\\n\\nWe once again sincerely appreciate your comprehensive and valuable suggestions, which are crucial for enhancing the quality of our paper. We hope our response can alleviate concerns you may have regarding our paper and look forward to further communication with you.\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"> Weakness 3: Stability of SafePatching Approach. 
SafePatching's dual-patch integration requires careful parameter tuning and may lack robustness or generalizability across different architectures or types of prompts.\\n\\nWe would like to clarify that, in fact, we **have provided a detailed analysis of the robustness and stability for all hyperparameters** of SafePatching including overall retention rate $p$, top rate $a$ and $b$, and scale weight $\\\\alpha$ and $\\\\beta$ `(lines 513 - 516)`. Experimental results and detailed analysis on four backbone models (LLaMA-2, LLaMA-3, Gemma and Mistral) are shown `in Figures 5 through 9 in Appendix G.3`.\", \"we_can_draw_two_conclusions_from_the_hyper_parameter_analysis\": \"- Random retention rate $p$ and the top rate $a$ and $b$ are **robust across different backbones**.\\n- The only model-specific hyper-parameter we need to adjust is the scale weight $\\\\alpha$ and $\\\\beta$. This process involves merely assignment operations in storage space, making it **very quick and efficient**. It can be completed **without involving any GPU computations**.\\n\\nRegarding the generalizability across different types of prompts, we have evaluated SafePatching under Seen and Unseen harmful prompts from AdvBench `(Table 2)`, and different harmful categories from Beavertails dataset [1] under continual post-safety alignment `(Table 4 and Figure 4)`. 
Moreover, we supplement our experiments with additional evaluations on the Beavertails dataset, which introduces 14 different categories of harmful prompts.\\n\\n|LLaMA-2-7B-Chat|Beavertails|XSTest|OKTest|AVG.|MT-Bench|\\n|--|--|--|--|--|--|\\n|Original|19.64|8.00|4.67|40.35|6.01|\\n|GA|3.65|35.20|38.67|39.02|4.01|\\n|GA+Mismatch|3.98|22.40|25.33|38.42|4.56|\\n|RESTA|5.39|66.33|41.60|39.39|5.49|\\n|NPO|17.07|18.80|18.67|39.26|5.62|\\n|SafeDecoding|**0.01**|80.80|59.67|/|5.72|\\n|Self-CD|0.03|22.80|41.67|/|3.98|\\n|ROSE|0.02|43.20|40.33|/|4.14|\\n|SafePatching|7.87|**3.33**|**2.33**|**40.42**|**6.14**|\\n\\nThese additional experiments demonstrate SafePatching\\u2019s ability to generalize across varied safety scenarios, reinforcing the method\\u2019s transferability and robustness.\", \"reference\": \"[1] Ji J, Liu M, Dai J, et al. Beavertails: Towards improved safety alignment of llm via a human-preference dataset[J]. Advances in Neural Information Processing Systems, 2023, 36.\\n\\n---\\n\\n> Question 1: Could you please clarify and elaborate how they are implemented given a harmful dataset?\\n\\n**Gradient Ascent for the Safety Patch**\\n\\nThe goal of the safety patch is to **reduce the likelihood of generating harmful outputs**. To achieve this, we perform gradient ascent on the loss associated with harmful responses. Specifically:\\n\\n- Objective: **Maximize the loss on harmful input-output pairs**. This effectively \\u201cteaches\\u201d the model to avoid or forget producing unsafe content by penalizing its unsafe responses.\\n\\n- Implementation: Please kindly refer to our code in supplementary files in the folder `src/GA/src/model/unlearned_model.py in line 14` to negate the sign of the loss.\\n\\n**Gradient Descent for the Over-Safety Patch**\\n\\nThe over-safety patch ensures the model does not become overly cautious, which could lead to false refusals. Specifically:\\n\\n- Objective: **Minimize the loss on the same harmful input-output pairs**. 
This allows us to remove the backbone\u2019s internal safety defenses and enable it to respond freely to any input prompt.\n\n- Implementation: This is the same as the standard SFT loss, without any change.\n\n> Regarding Ethics Concern: Potential for Misuse and Safety Bypass\n\nThank you for raising this important concern. As noted in Appendix I, we have discussed the potential risks associated with SafePatching. While the framework is designed to balance safety and usability by allowing benign responses to sensitive keywords, we recognize the potential risk of misuse by bad actors.\n\nTo mitigate this, we:\n- Emphasize research-only usage, ensuring the framework is used solely for advancing safety research.\n- Conduct extensive evaluations (e.g., on Beavertails and AdvBench), demonstrating SafePatching improves safety without introducing significant bypass vulnerabilities.\n- Advocate for restricted access and transparency, ensuring controlled deployment and community oversight.\n\nWe will further expand on these risks in the revised manuscript and plan to integrate dynamic adversarial testing to strengthen safety.\n\n---\n\nWe once again sincerely appreciate your comprehensive and valuable suggestions, which are crucial for enhancing the quality of our paper. We hope our response can alleviate concerns you may have regarding our paper and look forward to further communication with you.\"}", "{\"title\": \"Kind Reminder to Reviewer mxyn\", \"comment\": \"Dear Reviewer mxyn,\\n\\nCould you please let us know if our responses satisfactorily address the issues you raised? We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"title\": \"Kind Reminder to Reviewer Kpns\", \"comment\": \"Dear Reviewer Kpns,\\n\\nCould you please let us know if our responses satisfactorily address the issues you raised?
We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"summary\": \"The paper presents a method called SafePatching for post safety alignment (PSA) of large language models (LLMs). The authors claim that SafePatching addresses three PSA objectives: safety enhancement, over-safety mitigation, and utility preservation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The problem addressed\\u2014post-hoc safety alignment\\u2014is important for ensuring that LLMs behave safely in real-world applications.\\n2. The empirical evaluation and ablations are fairly comprehensive across different LLM backbones and benchmarks.\\n3. The method shows some promise in balancing safety with utility preservation compared to existing baselines.\", \"weaknesses\": \"1. The proposed approach seems to be largely composed of a series of straightforward adaptations or incremental improvements on recent work. For instance, the use of gradient ascent and descent techniques for deriving safety and over-safety patches is largely an adaptation of existing machine unlearning methods described in the paper, rather than a truly novel contribution. The concept of patching the difference set of important parameters between safety and over-safety patches is perhaps the most novel aspect. However, it's still a relatively straightforward extension of existing ideas in parameter importance and model merging.\\n2. While the proposed approach does demonstrate that it is the only one to improve safety, over-safety, and utility over the backbone, in many cases, it performs significantly worse than the baselines for a particular safety or over-safety benchmark. Moreover, the safety and over-safety improvements over the backbone model are quite marginal in some cases. 
This highlights that there is more work to be done in effectively controlling the balance between safety enhancement and over-safety mitigation than the approach in its current state.\", \"questions\": \"1. How does the use of gradient ascent and descent for patch derivation differ from recent work in unlearning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a SafePatching framework to improve the safety of large language models (LLMs) while maintaining their utility. The major contribution is two types of safety patches.\\n- The safety enhancement patch utilizes gradient ascent on harmful data to train the model to avoid generating unsafe responses. It effectively helps the model \\\"unlearn\\\" unsafe behaviors by adjusting the model parameters to minimize the risk of producing harmful content. \\n- The over-safety mitigation patch, developed through gradient descent, is designed to prevent the model from being overly cautious. It fine-tunes the model to ensure it does not overly restrict or reject benign inputs that might superficially appear sensitive or risky. 
\\n\\n The approach is tested across multiple LLMs, showing better performance in reducing harmful outputs, handling over-safety, and preserving utility compared to several existing methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed approach is easy to understand, logical, and appears to be effective.\", \"It addresses a significant and timely problem.\", \"The paper is overall well-written.\", \"The unlearning and fine-tuning techniques used in SafePatch are not new, the originality comes from considering dual patching at the same time.\", \"The paper includes an extensive set of experiments.\"], \"weaknesses\": \"- Limited Novelty in Core Techniques\\n\\nWhile the dual-patching approach is innovative in combining safety enhancement with over-safety mitigation, the core methods (e.g., gradient ascent and descent on harmful data) rely heavily on existing unlearning and fine-tuning techniques.\\n\\n- Clarity on Practical Deployment\\n\\nThe paper would benefit from more actionable details regarding the real-world deployment of SafePatching, especially the requirements on the harmful data set.\\n\\n- Stability of SafePatching Approach\\n\\nSafePatching's dual-patch integration requires careful parameter tuning, especially with the two gradient-based patches potentially introducing conflicts within the model. The process of managing these interactions, although effective, may lack robustness or generalizability across different architectures or types of prompts.\", \"questions\": \"In the SafePatching framework, Eq (1) and Eq (2) are designed to achieve two opposing objective by applying gradient-based updates in opposite directions on the same harmful dataset. Could you please clarify and elaborate how they are implemented given a harmful dataset?\\n\\n\\nIn SafePatching, what requirements should a harmful dataset fulfill? 
For example, are there specific expectations concerning its size, diversity, or other characteristics? Additionally, are these requirements realistic for SafePatching's application in real-world scenarios?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"- Potential for Misuse and Safety Bypass\\n\\nThe SafePatching framework\\u2019s dual-patch approach is designed to mitigate over-safety, allowing the model to respond to benign prompts with sensitive keywords. However, this opens up a risk of misuse if bad actors attempt to exploit this flexibility to bypass safety mechanisms deliberately.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind Reminder to Reviewer 1JaV\", \"comment\": \"Dear Reviewer 1JaV,\\n\\nCould you please let us know if our responses satisfactorily address the issues you raised? We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"title\": \"Kind Reminder to All Reviewers\", \"comment\": \"Dear All Reviewers,\\n\\nCould you please let us know if our responses satisfactorily address the issues you raised? We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We appreciate your positive feedback on our experimental results, comparisons with state-of-the-art methods, and insights into parameter distributions. 
We address each of the concerns in detail below:\n\n---\n\n> Weakness 1 and Question 3: Could a deeper explanation of these aspects clarify the novelty and rigor of the proposed approach in Section 3.3?\n\nOur SafePatching introduces a novel approach by implementing a controllable dual-patching process that effectively `(results in Table 2 and 3)` and efficiently `(results in Table 1)` achieves the three primary objectives of post-safety alignment (PSA): enhancing safety, reducing over-safety, and maintaining utility.\n\n- Regarding SNIP: In previous works, SNIP has been primarily used to analyze models\u2019 safety-related behaviors. By contrast, we employ SNIP to actively enhance post-alignment performance, extending its original application to a new context that focuses on achieving balanced PSA goals.\n\n- Regarding model merging methods by Yu et al. and Hui et al.: Rather than directly adopting their methods, we drew inspiration from their discussions on **parameter sparsification**. This guided us in designing our approach.\n\nBuilding on these ideas, we further propose **patching within the difference set of the most important parameter regions for each patch**, reducing conflicts between the safety and over-safety patches. As you noted, our analysis in Figure 3 provides interesting insights into the distribution of these key parameters, highlighting future research directions for the community.\n\nOur **experimental results in `Table 3` against state-of-the-art model merging methods** also demonstrate the effectiveness of our controllable dual-patching proposed in Section 3.3. \n\n---\n\n> Weakness 2.1 and Question 1: how does the method perform across other safety-related datasets?
Could testing with a broader range of harmful data enhance our understanding of its transferability to diverse safety scenarios?\n\nWe agree that validating our method across diverse safety scenarios would strengthen our results.\n\nAnd we would like to clarify that we have evaluated SafePatching on 3 different harmful categories from the Beavertails dataset [1] under continual post-safety alignment `(lines 520 \u2013 528, Table 4 and Figure 4 in Section 5.3)`.\n\nTo further address this, we supplement our experiments with additional evaluations on the Beavertails dataset, which introduces 14 different categories and perspectives of harmful content.\n\n|LLaMA-2-7B-Chat|Beavertails|XSTest|OKTest|AVG.|MT-Bench|\n|--|--|--|--|--|--|\n|Original|19.64|8.00|4.67|40.35|6.01|\n|GA|3.65|35.20|38.67|39.02|4.01|\n|GA+Mismatch|3.98|22.40|25.33|38.42|4.56|\n|RESTA|5.39|66.33|41.60|39.39|5.49|\n|NPO|17.07|18.80|18.67|39.26|5.62|\n|SafeDecoding|**0.01**|80.80|59.67|/|5.72|\n|Self-CD|0.03|22.80|41.67|/|3.98|\n|ROSE|0.02|43.20|40.33|/|4.14|\n|SafePatching|7.87|**3.33**|**2.33**|**40.42**|**6.14**|\n\nThese additional experiments demonstrate SafePatching\u2019s ability to generalize across varied safety scenarios, reinforcing the method\u2019s transferability and robustness beyond AdvBench alone. We will include these results in the revised paper to offer a more comprehensive evaluation of SafePatching\u2019s effectiveness across multiple safety domains.\", \"reference\": \"[1] Ji J, Liu M, Dai J, et al. Beavertails: Towards improved safety alignment of llm via a human-preference dataset[J]. Advances in Neural Information Processing Systems, 2023, 36.\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We thank you for your thoughtful and constructive feedback on our paper. We appreciate your recognition of the novelty and clarity of our method, as well as the comprehensive evaluation we conducted.
Below, we address each of the weaknesses you raised.\\n\\n---\\n\\n> Weakness 1: The experimental setting section seems to overlook the choice of hyperparameters.\\n\\nThank you for pointing out this potential oversight. In the initial version of the paper, we included the hyperparameter settings for our main experiments in `Table 6 (page 18 in the Appendix)`. However, we now realize that this placement might lead to readers overlooking this critical information. Based on your suggestion, we have **moved these details to `Table 1 (page 7 in the Main Text)` in the revised version** to ensure better accessibility and clarity.\\n\\nWe greatly appreciate your feedback in helping us improve the presentation of our work.\\n\\n---\\n\\n> Weakness 2: The experimental section is somewhat dense, especially section 5.2.\\n\\nThank you for your valuable feedback. We have revised the experimental section in the manuscript following your suggestions. Changes are highlighted in **orange font** for clarity. Specifically:\\n\\n- We have moved the main hyperparameter settings from the appendix to **Table 1 on page 7** in the main text.\\n\\n- The primary experimental results on LLaMA-3 and training/inference efficiency analysis have been relocated to the appendix to streamline the main text.\\n\\n- Section 5.2 has been divided into three subsections: **Ablation Study**, **Distribution of Important Parameters**, and **Robustness Analysis on Hyper-Parameters**. Additionally, we modified the first sentence of each paragraph to clearly convey the main conclusion of the respective analysis.\\n\\nThese changes aim to improve the readability and organization of the experimental section, ensuring that readers can navigate the content more effectively. We sincerely appreciate your feedback, which has greatly contributed to enhancing the presentation of our work.\\n\\n---\\n\\n> Question 1: could the robustness of top rate be due to the narrow range of top rate choices? 
Expanding the range to include smaller or larger top rates might yield more insights.\n\nThank you for your insightful question. We have conducted additional experiments to investigate the impact of both smaller and larger top rates on LLaMA-2-7B-Chat (top rate values constrained within 30%, as it cannot exceed the overall retention rate $p$).\n\n|Top Rate (%)|Seen|Unseen|XSTest|OKTest|AVG.|MT-Bench|\n|-|-|-|-|-|-|-|\n|(0.1, 0.1)|15|16|8.00|4.67|40.23|6.02|\n|(0.5, 0.5)|10|10|6.40|4.33|40.37|6.01|\n|(10, 10)|7.25|8|4.00|2.33|40.03|5.98|\n|(15, 15)|7.75|8.25|4.40|3.33|39.79|5.91|\n|(20, 20)|8.25|9.25|5.20|4.33|39.72|5.90|\n|(25, 25)|8.25|8.75|5.20|3.33|39.88|5.93|\n\nThe results show that while the model remains robust across all three evaluation dimensions with larger top rates, smaller top rates lead to performance degradation. This can be explained as follows:\n\n- Larger top rates (e.g., 25%): Even with higher top rates, the parameters in the difference set account for only about 10% of the total parameters, maintaining a **highly sparse** update. This aligns with findings in recent studies showing that even a small fraction of parameters can have a significant impact on large model performance [1,2].\n\n- Smaller top rates: With smaller top rates, the difference set **contains almost no parameters** from either the Safety or Over-Safety Patch, making it difficult to leverage their respective contributions effectively.\n\nWe appreciate your suggestion, as it has helped us provide a more comprehensive analysis of the impact of top rate variations, further strengthening the paper. Thank you again for your valuable feedback!\", \"reference\": \"[1] Yu L, Yu B, Yu H, et al. Language models are super mario: Absorbing abilities from homologous models as a free lunch[C]. ICML 2024.\n\n[2] Wei B, Huang K, Huang Y, et al. Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications[C].
ICML 2024.\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": [\"We appreciate your acknowledgment of our comprehensive empirical evaluations across different LLM backbones and benchmarks. Additionally, we are glad to see that you find merit in our approach\\u2019s ability to balance safety with utility preservation, as compared to existing baselines. We will address your concerns in detail as follows.\", \"---\", \"> Weakness 1: the use of gradient ascent and descent techniques is not a truly novel contribution. The concept of patching the difference set of important parameters is still a relatively straightforward extension of existing ideas in parameter importance and model merging.\", \"We thank you for your insights and the opportunity to clarify our core contribution.\", \"**Regarding gradient ascent and descent techniques**\", \"Gradient ascent and descent techniques are indeed part of our framework, but they are **only a small component of SafePatching**, specifically used to generate the safety and over-safety patches. And proposing new unlearning or fine-tuning techniques lies outside the scope of our study. We did not intend to position these techniques as a primary contribution of our work. Instead, the focus of SafePatching is to provide **an efficient (`results in Table 1`) and effective (`results in Table 2 and 3`) solution** for achieving the three PSA objectives.\", \"In addition, we believe the interesting part in gradient ascent and descent is that we leverage **a single harmful dataset to generate both types of patches**, thereby avoiding additional data overhead. This efficient use of resources may enhance the framework\\u2019s practical utility in real-world applications.\", \"**Regarding patching the difference set of important parameters (controllable dual-patching)**\", \"For parameter importance score SNIP: In previous works, SNIP has been primarily used to analyze models\\u2019 safety-related behaviors. 
By contrast, we employ SNIP to actively enhance post-alignment performance, extending its original application to a new context that focuses on achieving balanced PSA goals.\", \"For model merging methods: We propose controllable dual-patching to **patch within the difference set of the most important parameter regions for each patch**, reducing conflicts between the safety and over-safety patches. Our experimental results in `Table 3` against state-of-the-art model merging methods demonstrate the effectiveness of our controllable dual-patching. And our analysis in `Figure 3` provides interesting insights into the distribution of these key parameters. Thus, both our experimental results and analytical studies demonstrate that SafePatching consistently outperforms direct applications of current model merging techniques in addressing PSA objectives.\", \"---\", \"> Weakness 2.1: The proposed method performs significantly worse than the baselines for a particular safety or over-safety benchmark.\", \"We thank you for pointing out this observation and would like to clarify the strengths of SafePatching as demonstrated in our experimental results. Specifically:\", \"As shown in `Tables 2, 9, and 10`, across **all four backbone models**, our SafePatching achieves **top 3 performance** in safety and **top 2 performance** in over-safety consistently.\", \"In contrast, baselines that **achieve the best performance in either safety or over-safety tend to severely degrade the other aspect**, often causing significant trade-offs.
For example, almost all current PSA methods are optimized solely for safety, often leading to severe over-safety violations.\", \"These results highlight that **SafePatching achieves Pareto-optimal performance** and strikes a more balanced trade-off, making it uniquely effective for addressing all three PSA objectives simultaneously.\", \"---\", \"> Weakness 2.2: Moreover, the safety and over-safety improvements over the backbone model are quite marginal in some cases.\", \"We appreciate your feedback and would like to clarify that the improvements achieved by our method in terms of safety and over-safety are far from marginal.\", \"Safety: SafePatching demonstrates significant improvements in safety performance across all backbones:\", \"On LLaMA-2, safety improves by **67.22%**,\", \"On LLaMA-3 by **41.13%**,\", \"On Gemma by **71.37%**,\", \"And on Mistral by **84.60%**.\", \"Over-Safety: Our method also achieves notable reductions in over-safety, with improvements of:\", \"**55.33%** on LLaMA-2,\", \"**36.41%** on LLaMA-3,\", \"**41.17%** on Gemma,\", \"And **58.04%** on Mistral.\", \"Crucially, SafePatching is among the very few methods that effectively mitigate over-safety. **Almost no other baselines achieve any reduction in over-safety**; instead, they tend to significantly exacerbate the problem. For example, SafeDecoding leads to an over-safety rate increase from **8% to 80.8%** on the XSTest benchmark with LLaMA-2, demonstrating its failure to balance safety and over-safety.\", \"Thank you for your valuable comments. We will revise the manuscript to better highlight these gains.\"]}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We appreciate your positive feedback on the effectiveness and clarity of our SafePatching framework. Specifically, we are glad that you acknowledge the logical approach, timeliness of addressing a pressing safety issue, and the substantial experimental validation.
We also appreciate the recognition that our approach to dual patching is innovative, as this simultaneous handling of both safety enhancement and over-safety mitigation represents a novel and impactful angle for LLM safety.\n\n---\n\n> Weakness 1: While the dual-patching approach is innovative in combining safety enhancement with over-safety mitigation, the core methods (e.g., gradient ascent and descent on harmful data) rely heavily on existing unlearning and fine-tuning techniques.\n\nAs you noted, the dual-patching approach in SafePatching is indeed our most innovative contribution and serves as the core of the framework. This approach is specifically designed to balance safety, over-safety, and general utility, achieving the best results across these dimensions, as evidenced by our experiments.\n\nIn contrast, the goal of SafePatching is not to propose new unlearning or fine-tuning techniques, as this lies outside the scope of our study. The elegance of such a patch derivation process is that we leverage **a single harmful dataset to generate both types of patches**, thereby avoiding additional data overhead. This efficient use of resources may enhance the framework\u2019s practical utility in real-world applications.\n\n---\n\n> Weakness 2 and Question 2: What requirements should a harmful dataset fulfill? Are these requirements realistic for SafePatching's application in real-world scenarios?\n\nThe requirements for the harmful dataset in SafePatching are relatively **flexible**.
As noted in `lines 975 \\u2013 978` of our paper, SafePatching is trained on harmful data derived from inputs that have successfully bypassed the model\\u2019s defenses.\\n\\nObtaining such harmful data is feasible in real-world applications, as:\\n\\n- Even safety-aligned models can be prompted to produce harmful outputs [1].\\n\\n- Advanced automated red-teaming techniques are also now capable of efficiently identifying model vulnerabilities [2,3], helping to uncover harmful content without requiring proactive data collection. \\n\\nThus, this harmful data can accumulate naturally as vulnerabilities are discovered in the backbone, allowing SafePatching to serve as an effective **post-hoc remedy**.\\n\\nOur experimental results demonstrate that SafePatching significantly improves protection against both \\u201cSeen\\u201d and \\u201cUnseen\\u201d harmful data, illustrating its **generalization capabilities** applicable in real-world scenarios.\\n\\nAdditionally, we have further validated SafePatching\\u2019s effectiveness in a more realistic setting of **Continual Post Safety Alignment** `(Section 5.3)`, showing its adaptability and robustness for ongoing deployment.\", \"reference\": \"[1] Wei A, Haghtalab N, Steinhardt J. Jailbroken: How does llm safety training fail?[J]. Advances in Neural Information Processing Systems, 2023, 36.\\n\\n[2] Bai Y, Jones A, Ndousse K, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback[J]. arXiv preprint arXiv:2204.05862, 2022.\\n\\n[3] Perez E, Huang S, Song F, et al. Red Teaming Language Models with Language Models[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 3419-3448.\"}" ] }
09FiNmvNMw
Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning
[ "Hyun Ryu", "Gyeongman Kim", "Hyemin S. Lee", "Eunho Yang" ]
Complex logical reasoning tasks require a long sequence of reasoning, which a large language model (LLM) with chain-of-thought prompting still falls short. To alleviate this issue, neurosymbolic approaches incorporate a symbolic solver. Specifically, an LLM only translates a natural language problem into a satisfiability (SAT) problem that consists of first-order logic formulas, and a sound symbolic solver returns a mathematically correct solution. However, we discover that LLMs have difficulties to capture complex logical semantics hidden in the natural language during translation. To resolve this limitation, we propose a Compositional First-Order Logic Translation. An LLM first parses a natural language sentence into newly defined logical dependency structures that consist of an atomic subsentence and its dependents, then sequentially translate the parsed subsentences. Since multiple logical dependency structures and sequential translations are possible for a single sentence, we also introduce two Verification algorithms to ensure more reliable results. We utilize an SAT solver to rigorously compare semantics of generated first-order logic formulas and select the most probable one. We evaluate the proposed method, dubbed CLOVER, on seven logical reasoning benchmarks and show that it outperforms the previous neurosymbolic approaches and achieves new state-of-the-art results.
[ "Logical Reasoning", "Large Language Models", "Neurosymbolic Approaches", "Semantic Decomposition", "Formal Language Verification" ]
Accept (Poster)
https://openreview.net/pdf?id=09FiNmvNMw
https://openreview.net/forum?id=09FiNmvNMw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zygggqM8Tm", "yeyZ2RU9pP", "yQMwgvFDdg", "xgPj7Xbrvb", "wQaavJN4EF", "uUMN9XbEmW", "tnnRHjHDJ8", "o6kdM9TOlc", "iwysaitNYo", "eRxUZZupYd", "dPgWBtl9Ke", "d1APLJfQEF", "avg4gfaTld", "WfEEpGJZnK", "McEMFFSEcM", "MbazXzrvAa", "JjUidjc2hS", "HqfJUEutLv", "E8KtpGB7r7", "8zOiz05OWg", "55f3FWt13U", "44MXUdz03S", "2NKrucYzpH" ], "note_type": [ "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732172777103, 1730211076543, 1734549587822, 1737523963119, 1732681676341, 1730146874496, 1732271454276, 1730699911120, 1732258540731, 1732191213369, 1732172735001, 1732172531111, 1732173972560, 1732758724944, 1732216234608, 1732172029360, 1732174768908, 1732572161976, 1732171519896, 1732171809085, 1732171585999, 1732191354077, 1733204527297 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_yp79" ], [ "ICLR.cc/2025/Conference/Submission9134/Area_Chair_Xiaf" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_W7Jx" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_yp79" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_5CZv" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_yp79" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9134/Reviewer_yp79" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_yp79" ], [ "ICLR.cc/2025/Conference/Submission9134/Reviewer_W7Jx" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ], [ "ICLR.cc/2025/Conference/Submission9134/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors (3/3)\", \"comment\": \"Due to the space limit, we continue our answering in the following.\\n\\n**Q3. How much of the performance gain seen in CLOVER is due to a higher execution rate? I think expanding on how the metrics in Table 2 are computed would be helpful. For example is `Execution Acc = (correct_and_executable_programs / all)` or `Execution Acc = (correct_and_executable_programs / executable_programs)`.**\\n\\nA. To answer the reviewer's confusion on execution accuracy first, the latter one is correct. To clarify the metrics in Table 2, `Program Acc = (correct_and_executable_programs / all)`, `Execution Rate = (executable_programs / all)`, and `Execution Acc = (correct_and_executable_programs / executable_programs)`. These are briefly described in lines 417-419.\\nCLOVER improves syntactic and semantic correctness of translation which contributes to the increased execution rate and execution accuracy, and these two collaboratively contribute to the final performance gain.\"}", "{\"summary\": \"Authors propose a novel method of using LLMs to translate natural language descriptions into a set of first-order logical forms. This novel method decomposes this challenging task into two steps. 
The first step is to translate a long and complex sentence into a number of short sentences; the second step is to translate each short sentence into simple first-order logical forms and the connections between/among these short sentences into corresponding logical connectors. Experiments on seven benchmark datasets greatly outperform the current SOTA level.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"It is reasonable to improve the translation quality by decomposing a complex sentence into several shorter sentences. Using SAT solvers certainly improves the quality.\", \"weaknesses\": \"Not all natural language sentences can be translated to first-order logic forms. Authors did not discuss what sentences cannot be translated.\n\nAuthors use a symbolic SAT solver in evaluating and selecting correct first-order logical forms. This limits the method to the case where SAT solvers work. \n\nTheoretically, the meaning of natural language is not a logical formula. This work is valued within fixed benchmark datasets. \n \nThe formalism of the paper is not easy to read.\", \"questions\": \"1. line 115: \\\"To save computational cost, we compare each one of logically equivalent formulas\\\". You probably mean to \\\"compare each logically equivalent formula\\\". How can this save computational cost?\n\n2. Line 149: how to read this formula in natural language? \n\n3. What is the output for the sentence \\\"A barber shaves all who do not shave themselves.\\\"? \n\n4. How are \\\"Declarations\\\" created? \n\n5. How to decide a sentence not fit for your system? (or how to decide an unintended input sentence?)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers generally saw the merits of the proposal. There was general interest in the paper.
The authors have responded to several issues raised by the reviewers. On reading the paper at a fairly high-level, it does appear interesting and novel. The authors in the rebuttal have clarified some of the issues raised by the reviewers. I hope these clarifications will be directly incorporated in the final version of the paper. I suggest borderline acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer who had the negative review did not quite respond to the authors. The authors gave a detailed explanation for the issues raised but the reviewer did not respond to those rebuttals as well. Hence I down weighted that reviewer's rating when recommending acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We truly appreciate your response and would like to provide a brief reply to your comments.\\n\\n**Q1. Could you change API-costs to token counts?**\\n\\nA. We would also include token counts in the further revision.\\n\\n**Q2. Is there a way to make a fair comparison between Logic-LM and CLOVER?**\\n\\nA. Regarding the fair comparison of Logic-LM and CLOVER, instead of equalizing inference time costs, we use the same set of few-shot examples (line 408-409). To be specific, we derive the few-shot examples of CLOVER from those of Logic-LM. Since Logic-LM is hard to scale, we think our approach would be one of the reasonable ways to compare those two.\\n\\n**Q3. An extensive error analysis in the appendix would be beneficial.**\\n\\nA. We would also add extensive error analysis of CLOVER including the ones we show above in the further revision.\"}", "{\"summary\": \"Introduces a new algorithm, CLOVER, for solving logic questions in natural language, specifically by addressing the challenges in parsing natural language into the correct first-order logic so that an SAT solver can determine the answer. 
To do this, the paper proposes translating the question into smaller pieces that accumulate based on how each logic unit in natural language relates to other logic units until the resulting sentence is the final sentence that needs to be tested. Each accumulation from the previous step, including the final sentence, is translated into first-order logic, then the paper introduces two novel verification methods that check if the translations are valid and if there are contradictions in the final translation. The paper shows that with this accumulation strategy with their two verifiers, their method can outperform all baselines (including Logic-LM, a baseline that similarly translates sentences into FOL for SAT solvers) on common logic datasets like AR-LSAT, FOLIO, and ProofWriter.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The results are promising; the authors do a fantastic job of motivating their new algorithm CLOVER by showing common failures of previous methods like Logic-LM, then show that their method fixes many of these errors (leading to the performance boost reported in Table 1).\", \"The method here is pretty novel. Breaking down sentences into atoms isn't too novel, but I haven't seen someone decode them all individually and progressively (tapping into the auto-regressive natural of LMs) to improve the performance of the translation. The verification algorithms seem pretty intuitive (although they are described complexly), but again, despite being intuitive, I think they are fairly novel as well.\"], \"weaknesses\": [\"The ablations in Table 3 need to be explained more clearly and then discussed. What is \\\"Is Clover?\\\" and why is the simplest ablation (no clover, direct translation, no verification / essentially none of the new things introduced in this paper) outperforming Logic-LM on AR-LSAT by 10.9% already from Table 1? 
Does this mean that your direct translation prompt already improves over complex algorithms like Logic-LM? If so, this deflates the papers impact, so it should be addressed (it's possible I am missing something, but others will catch this too, so it's best to explain it away.)\", \"I believe the paper would benefit greatly from expanding on the models being evaluated; right now, only GPT-4o and GPT-4o-mini are evaluated. Showing that CLOVER consistently outperforms the baseline methods across model classes would improve the impact of this work.\", \"(minor point) There is no discussion of inference-time compute costs for CLOVER vs. the other baselines. I imagine the inference cost is significantly higher, but I am unsure how much. Is this negligible compared to Logic-LM? Is there a way to compare CLOVER with baselines that use the equivalent amount of compute during inference? I think much of this point could be explained away with a textual justification (i.e., this isn't possible, or the compute costs are nearly equivalent, etc.), but I do think it should be mentioned.\", \"(minor point) Clarity in section 3 could be improved. I would use the example in Figure 2 to clearly define each variable mentioned in the text to help readers follow your algorithm. For instance, defining with x^prep, T, phi_k, the mapping NL(phi), etc., with values from Figure 2 would help readers follow significantly. This could also be done in Figure 2 if you mark which parts of it are which variables. The text gets very dense with variables that are derived from other variables quickly; having these concrete instantiations really helps.\"], \"questions\": [\"I'm curious why the execution rate increases when using CLOVER. 
As I read the methods section, it looked like CLOVER primarily helps with execution accuracy, but I didn't see much about how it would help repair/fix/generate better code for the SAT solver.\", \"It's reported that \\\"CLOVER\\u2019s errors are primarily caused by preprocessing and other errors, which takes 78.6% of the total errors\\\", do you have examples of this? Is this an error in the accumulation stage? I think the paper does a great job of explaining where Logic-LM fails and why CLOVER is needed, but I think expanding on CLOVER errors is just as important to show where researchers can look next.\", \"How much of the performance gain seen in CLOVER is due to a higher execution rate (runnable code)? I think expanding on how the metrics in Table 2 are computed would be helpful. For example is `Execution Acc = (correct_and_executable_programs / all)` or `Execution Acc = (correct_and_executable_programs / executable_programs)`. The latter, I think, helps distinguish if you are generating better executable problems or if you are only improving the execution rate (which maybe there is a simple fix to Logic-LM to help it create better executable problems)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I updated my grade. I am not sure whether all translations were made by the system, as I cannot find the code and the data.\"}", "{\"summary\": \"The paper introduces CLOVER, an approach designed to enhance the translation of natural language logical problems into logical code, thereby improving the performance of language models on logical reasoning benchmarks. CLOVER achieves this by compositional translation of natural language into first-order logic and verification of logical semantics. 
The method involves parsing natural language sentences into logical dependency structures, translating these into first-order logic formulas, and employing verification algorithms to ensure accuracy. The authors demonstrate CLOVER's effectiveness on seven logical reasoning benchmarks, showing it outperforms previous neurosymbolic approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a new approach by breaking down compositional translation into smaller steps and combining it with verification for logical reasoning tasks, leading to improved performance in neurosymbolic translation.\", \"The experimental results are robust using GPT4-o, showing improvements over other methods across multiple benchmarks.\", \"The authors propose two SAT-based first-order logic verification algorithms for selecting a sample from LLMs' logical code generations.\"], \"weaknesses\": [\"The approach is primarily applicable to problems that can be represented in a SAT solver, limiting its generalizability to other reasoning datasets, such as those involving mathematical equations or visual components, e.g., MATH dataset.\", \"The core idea of breaking down tasks into subtasks and using multiple samples and tests (e.g., verification, self-reflection, deterministic tests) to select the best generation is not novel.\", \"The paper lacks comparison with chain-of-thought (CoT) based methods designed to improve implicit reasoning of language models, as in \\\"reliable reasoning beyond natural language\\\". These methods help the model extract information that is implied but not directly stated by interleaving natural language comments with logical code, and can alleviate the translation bottlenecks identified.\", \"The paper only reports results using one language model, making it unclear if the method would improve performance across different models and weakening the experimental results.\"], \"questions\": \"1. 
Is it possible to extend CLOVER to improve performance on tasks that involve reasoning with data formats beyond natural language, such as mathematical equations or visual reasoning tasks?\\n2. Can the authors provide more insights into how CLOVER compares with CoT-based methods designed for improving implicit reasoning of LLMs?\\n3. Why not test CLOVER on a wider range of language models to assess its generalizability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"**Q1. Here, you need to use the descriptor \\u03b9 to denote those people, right?**\\n\\nA. Yes, the reviewer is correct. In the preprocessing step, to implement the declarations in z3, we instruct an LLM to assign an arbitrary number of those people. The estimated declarations through the preprocessing step are shown in the following.\\n```\\n# Declarations\\npeople = [barber, person_1, person_2, person_3]\\nshaves = Function([people, people] -> [bool])\\n```\\nFor simplicity, we omit this detail in our first response as follows:\\n> An LLM returns a theory $\\\\hat{T}$ that involves the following declarations: A sort named $\\\\textit{people}$, a predicate named $\\\\textit{shaves}$ of the type $\\\\textit{people} \\\\times \\\\textit{people}$, and a constant named $\\\\textit{barber}$ of sort $\\\\textit{people}$ (a constant is a function with zero arity).\\n\\n**Q2. Is this translation written by human experts, or by your system?**\\n\\nA. All the translations shown above are generated by our system. We use gpt-4o as a language model.\\n\\n**Q3. If a logical system contains inconsistent statements, this system will assert any statement as true. Why does the SAT output nothing, when the input has inconsistent statements?**\\n\\nA. The filtering in line 310-311 is a simple process for eliminating unsatisfiable formulas using a SAT solver. 
Since an inconsistent formula is unsatisfiable, it is filtered out during this process. For better understanding, we show a pseudocode of this process in the following.\\n```\\n# Formula\\nf = ForAll([p:people], Not(shaves(p, p)) == shaves(barber, p))\\n\\n# Checking satisfiability\\nsolver = Solver()\\nsolver.add(f)\\nprint(solver.check() == sat)\\n```\\nHere, the code returns `False` which means the formula is unsatisfiable, so we filtered out this formula.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We would like to refer SatLM [1], which covers logical reasoning problems by using a SAT solver. In Section 2 of this paper, there is a following description:\\n> First, because the SAT solver is *sound* (i.e., any assignment it produces satisfies the formula), the solution is correct by construction. Thus, assuming that the parsing is correct and $\\\\hat{\\\\Phi}$ and $\\\\hat{Q}$ match $\\\\Phi$ and $Q$, we have a proof that the solution is indeed correct.\\n\\nHere, $\\\\hat{\\\\Phi}$ and $\\\\hat{Q}$ refer to the estimated formulas of constraints and a query, and $\\\\Phi$ and $Q$ refer to the ground truth formulas.\\n\\n---\\n[1] Satlm: Satisfiability-aided language models using declarative prompting. Ye et al., 2024.\"}", "{\"title\": \"Rebuttal by Authors (2/3)\", \"comment\": \"Due to the space limit, we continue our answering in the following.\\n\\n**W3. Can you discuss inference-time compute costs for CLOVER vs. the other baselines?**\\n\\nA. It is difficult to measure inference time costs for methods that use LLMs with API calls. Specifically, inference time significantly depends on the current network traffic of an API, and the number of parameters are unknown. Despite this limitation, we compare CLOVER and the baselines by their API usage costs, which is a reliable way to measure inference time costs.\\n\\nFor comparison, we use gpt-4o-mini as a language model and measure the costs on the AR-LSAT annotated subset. 
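The unsatisfiability filtering discussed above can also be sanity-checked without z3, by brute-forcing every interpretation of `shaves` over small finite domains. The sketch below is illustrative only, not the authors' implementation; the function name `barber_model_exists` and the choice of person 0 as the barber are our own encoding assumptions.

```python
from itertools import product

def barber_model_exists(n):
    """Search every interpretation of the binary predicate `shaves` over a
    domain of n people (person 0 plays the barber) for a model of
    forall p: (not shaves(p, p)) == shaves(barber, p)."""
    for bits in product([False, True], repeat=n * n):
        shaves = {(a, b): bits[a * n + b] for a in range(n) for b in range(n)}
        if all((not shaves[(p, p)]) == shaves[(0, p)] for p in range(n)):
            return True
    return False

# Instantiating the quantifier at p = barber forces
# (not shaves(barber, barber)) == shaves(barber, barber),
# which no interpretation satisfies, so no finite model exists.
print([barber_model_exists(n) for n in (1, 2, 3)])  # [False, False, False]
```

The search fails for every domain size, matching the unsat verdict a SAT solver returns on this formula.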
Standard prompting and CoT prompting both cost 0.02 USD, Logic-LM costs 0.08 USD, SymbCoT costs 0.15 USD, and CLOVER costs 0.30 USD. CLOVER requires larger amount of inference costs compared to the baselines since the compositional first-order logic translation generates formulas for each logical dependency structure of a target sentence. However, we think that the increased inference time cost is worth for the significant performance gain in Table 1. We add this disscusion in Appendix H.\\n\\n**W4. Clarity in section 3 could be improved.**\\n\\nA. Following the reviewer's feedback, we add corresponding mathematical notations of Section 3 to Figure 2.\\n\\n**Q1. Why does the execution rate increase when using CLOVER?**\\n\\nA. We find out that direct translations of complex logical sentences using Logic-LM often include syntax errors. In contrast, CLOVER reduces these errors by initiating translation from atomic subsentences, which improves the execution rate. \\nFor example, a theory includes declarations of two types of sorts, $\\\\textit{families}$ and $\\\\textit{buildings}$, a predicate named $\\\\textit{owned}$ of type $\\\\textit{families} \\\\times \\\\textit{buildings}$, and constants named $\\\\textit{inn}$, $\\\\textit{mill}$, and $\\\\textit{forge}$ of sort $\\\\textit{buildings}$. The target sentence is \\\"Neither the inn nor the mill belonged to the owner of the forge.\\\". Logic-LM translates this sentence as $(\\\\forall b : \\\\textit{buildings})\\\\, (\\\\textit{owned}(\\\\textit{forge}, b) \\\\rightarrow \\\\lnot(\\\\textit{owned}(\\\\textit{inn}, b) \\\\lor \\\\textit{owned}(\\\\textit{mill}, b)))$. 
This formula has a syntax error since the predicates include mismatched sort inputs (i.e., $\\\\textit{inn}$, $\\\\textit{mill}$, and $\\\\textit{forge}$ are not constants of sort $\\\\textit{families}$).\\nHowever, CLOVER first translates an atomic subsentence \\\"All families are the owner of the forge.\\\" into $(\\\\forall f : \\\\textit{families})\\\\, (\\\\textit{owned}(f, \\\\textit{forge}))$, sequentially translates other subsentences, and finally translates the target sentence into $(\\\\forall f : \\\\textit{families})\\\\, (\\\\textit{owned}(f, \\\\textit{forge}) \\\\rightarrow \\\\lnot(\\\\textit{owned}(f, \\\\textit{inn}) \\\\lor \\\\textit{owned}(f, \\\\textit{mill})))$, which is both syntactically and semantically correct.\\n\\n**Q2. Do you have examples for the CLOVER's errors?**\\n\\nA. Most of CLOVER's errors on the AR-LSAT annotated subset are caused by incorrect preprocessing (the first equation of Eq. 2), which consists of incorrect target sentence generation and incorrect theory estimation.\\nFor example, an original logical reasoning problem includes the following sentence: \\\"There are three open positions on the appellate court and six open positions on the trial court, but not all of them will be filled at this time.\\\". During the preprocessing, an LLM mistakenly omits the subsentence \\\"but not all of them will be filled at this time.\\\" and generates the target sentence as \\\"There are three open positions on the appellate court and six open positions on the trial court.\\\", which degrades the meaning of the original context. \\nIn another example, an estimated theory includes declarations of three types of sorts, $\\\\textit{speakers}$, $\\\\textit{rooms}$, and $\\\\textit{times}$, and a function named $\\\\textit{speech}$ of type $\\\\textit{speakers} \\\\rightarrow \\\\textit{rooms} \\\\times \\\\textit{times}$. 
Here, the function declaration is incorrect since a function in first-order logic can only return a single type of sort. \\nInstead of a single LLM inference for preprocessing, put much work on target sentence generation and theory estimation would greatly benefit the overall performance.\"}", "{\"title\": \"Rebuttal by Authors (1/3)\", \"comment\": \"We thank the reviewer for the constructive and detailed comments. **Q** denotes a Question, and **W** denotes a Weakness not addressed in the Questions.\\n\\n**W1. In Table 3, what is \\\"Is Clover?\\\" and why is the simplest ablation outperforming Logic-LM on AR-LSAT?**\\n\\nA. To answer the first question, \\\"Is Clover?\\\" means if a method is the proposed CLOVER or if it is one of the ablations. In Table 3, the last two rows correspond to CLOVER, which use both compositional translation and verification. In contrast, the first three rows correspond to the ablations, which do not use either compositional translation (i.e., use direct translation) or verification or both. To clarify this, we add related captions in Table 3.\\n\\nTo answer the second question, the result that the simplest ablation outperforming Logic-LM only applies for the AR-LSAT dataset, and it comes from the specialized data characteristics. Unlike other datasets, we additionally need to predict solver functions according to the question of the logical reasoning problem. For instance, if the question is \\\"Which of the queries CAN be true?\\\", then we need to assign a function that checks a *satisfiability* of the query given the constraints. If the question is \\\"Which of the queries MUST be true?\\\", then we need to assign a function that checks a *validity* of the query given the constraints. \\nLogic-LM predicts a solver function together with the first-order logic translation by a single inference in Eq. 1. For CLOVER, to incorporate solver function prediction in our problem formulation in Eq. 
2, we perform this prediction at the preprocessing step. Comparing those two solver predictions, the latter one has significantly less load to an LLM since the task gets much simpler. Therefore, using the latter one improves accuracy to predict solver functions, where 10.9% gain comes from this. To support this analysis, we compare performance of the simplest ablation and Logic-LM on the three other datasets in the following table.\\n| | Logic-LM | simplest ablation |\\n|------|:------:|:------:|\\n| ZebraLogic | 45.4 | 45.4 |\\n| Puzzle | 64.0 | 64.0 |\\n| Symbol | 81.8 | 80.8 |\\n\\nThe results indicate that the simplest ablation and Logic-LM have almost the same performance on the other datasets. This is because the other datasets use a single solver function, where there is no need for solver function predictions. We add the details of the solver function prediction on AR-LSAT in Appendix E. Overall, the performance gain on AR-LSAT is a side effect of the problem formulation.\\n\\n**W2. I believe the paper would benefit greatly from expanding on the models being evaluated.**\\n\\nA. We include the answer to Q3 of the reviewer 5CZv in the following.\\n\\n> In this paper, we evaluate CLOVER on two language models, gpt-4o and gpt-4o-mini (The results using gpt-4o-mini are presented in Table 5, Appendix G). For further evaluation, we include additional two language models including gpt-3.5-turbo and gpt-3.5-turbo-instruct, which are another chat-focused model and an instruction-following model, respectively. As in Table 5, we compare performance of CLOVER and neurosymbolic approach baselines on the Puzzle and Symbol datasets. If the symbolic solver cannot execute the solution, then we take random guesses. We exclude the performance of SymbCoT using gpt-3.5-turbo-instruct since the prompt including few-shot examples exceeds the context window of the language model. 
The following results show that CLOVER clearly outperforms the baselines across different language models. We add this results in Appendix G.\\n> Due to the limitation of computational resources, we mainly use OpenAI models, but we will expand our evaluation on other proprietary models and open-sourced models.\\n> \\n> Performance on Puzzle dataset using different language models.\\n> | Puzzle | Logic-LM | SymbCoT | CLOVER |\\n> |------|:------:|:------:|:------:|\\n> | gpt-4o-mini | 42.5 | 60.0 | **60.5** |\\n> | gpt-3.5-turbo | 42.5 | 35.0 | **63.5** |\\n> | gpt-3.5-turbo-instruct | 46.0 | N/A | **59.0** |\\n> \\n> Performance on Symbol dataset using different language models.\\n> | Symbol | Logic-LM | SymbCoT | CLOVER |\\n> |------|:------:|:------:|:------:|\\n> | gpt-4o-mini | 38.4 | 46.5 | **71.7** |\\n> | gpt-3.5-turbo | 24.2 | 27.3 | **60.6** |\\n> | gpt-3.5-turbo-instruct | 50.5 | N/A | **70.7** |\"}", "{\"comment\": \"> a sound SAT solver could investigate if two formulas are logically equivalent, or find counter-interpretations of the two formulas which are then used for disproving.\\n\\nI cannot totally agree with such a description of the \\\"soundness\\\" of a SAT solver. Could you please add some reference papers?\"}", "{\"title\": \"Reminder for Paper Discussion\", \"comment\": \"Dear Reviewer 5CZv,\\n\\nWe appreciate the time and effort you have dedicated to reviewing our paper. We hope our responses and additional results have addressed your concerns. If you have any further questions or suggestions, we would be grateful to hear them. Thank you once again for your valuable feedback throughout this process!\\n\\nBest, \\nAuthors\"}", "{\"comment\": \"Here, you need to use the descriptor &iota; to denote those people, right?\\n\\nIs this translation written by human experts, or by your system? \\n\\nIf a logical system contains inconsistent statements, this system will assert any statement as true. 
Why does the SAT output nothing, when the input has inconsistent statements?\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"Due to the space limit, we continue our answering in the following.\\n\\n**Q1. Line 115: \\\"To save computational cost, we compare each one of logically equivalent formulas\\\". You probably mean to \\\"compare each logically equivalent formula\\\". How can this save computational cost?**\\n\\nA. To clarify the meaning of the sentence in line 115, here is an explanation using an example in Figure 2. After compositional first-order logic translation, there are six candidate formulas. For disproving by counter-interpretation, we should compare all the possible 15 pairs of six formulas. To save computational cost, we first group logically equivalent formulas which in fact there are only two logically different formulas, and then we could compare a single pair of the two formulas.\\n\\n**Q2. Line 149: How to read this formula in natural language?**\\n\\nA. For intuitive explanation, Eq. 1 means that in prior neurosymbolic approaches [1, 2, 3], an LLM translates a logical reasoning problem $x$ into a theory $\\\\hat{T}$ and pairs of first-order logic formula and its natural language description.\\nNote that a theory $\\\\hat{T}$ includes 1) declarations of sorts, functions, and predicates and 2) the most commonly applied theories (e.g., theory of equality, arithmetic, etc.). For simplicity, we presume that a theory $\\\\hat{T}$ always incorporates the most commonly applied theories.\\n\\n**Q3. What is the output for the sentence \\\"A barber shaves all who do not shave themselves.\\\"?**\\n\\nA. Following our problem formulation, we first run a preprocessing step as in Eq. 2. 
An LLM returns a theory $\\\\hat{T}$ that involves the following declarations: A sort named $\\\\textit{people}$, a predicate named $\\\\textit{shaves}$ of the type $\\\\textit{people} \\\\times \\\\textit{people}$, and a constant named $\\\\textit{barber}$ of sort $\\\\textit{people}$ (a constant is a function with zero arity). An LLM also returns a target sentence same as the original sentence.\\n\\nNow, we perform compositional first-order logic translation to translate the target sentence under the estimated theory.\\n1. Logical Dependency Parsing \\nStructure \\\\#1: \\nU1=\\\"A barber shaves all\\\", D1=\\\"who do not shave themselves\\\" \\nD1 -> U1 \\nStructure \\\\#2: \\nU1=\\\"A barber shaves all\\\", U2=\\\"People do not shave themselves\\\", C1=\\\"(merge)\\\" \\nU1 -> C1, U2 -> C1\\n\\n2. Component Accumulation \\nFor Structure \\\\#1: \\n1\\\\) A barber shaves all. \\n2\\\\) A barber shaves all who do not shave themselves. \\nFor Structure \\\\#2: \\n1\\\\) A barber shaves all. \\n2\\\\) People do not shave themselves. \\n3\\\\) A barber shaves all who do not shave themselves.\\n\\n3. Sequential Translation \\nFor Structure \\\\#1: \\n1\\\\) $(\\\\forall p : \\\\textit{people})\\\\, (\\\\textit{shaves}(\\\\textit{barber}, p))$ \\n2\\\\) $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p) \\\\rightarrow \\\\textit{shaves}(\\\\textit{barber}, p))$ (Formula \\\\#1) \\nFor Structure \\\\#2: \\n1\\\\) $(\\\\forall p : \\\\textit{people})\\\\, (\\\\textit{shaves}(\\\\textit{barber}, p))$ \\n2\\\\) $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p))$ \\n3\\\\) $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p) \\\\rightarrow \\\\textit{shaves}(\\\\textit{barber}, p))$ (Formula \\\\#2)\\n\\nLastly, we perform first-order logic verification to select the most probable formula.\\nAs described in line 310-311, we first filter out $\\\\hat{T}$-*unsatisfiable* formulas using a SAT solver. 
Since Formula \\\\#1 and \\\\#2 are both $\\\\hat{T}$-*satisfiable*, those are not filtered out. Those two formulas are syntactically the same, so we do not have to further proceed the verification. The output formula is $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p) \\\\rightarrow \\\\textit{shaves}(\\\\textit{barber}, p))$.\\n\\n**Q4. How are \\\"Declarations\\\" created?**\\n\\nA. The first equation in Eq. 2 (line 170) shows how declarations are generated. According to the equation, an LLM takes a logical reasoning problem $x$ as a input and generates a theory $\\\\hat{T}$ and a set of target natural language sentences. A theory $\\\\hat{T}$ corresponds to the \\\"Declarations\\\".\\n\\n**Q5. How to decide a sentence not fit for your system? (or how to decide an unintended input sentence?)**\\n\\nA. The first equation in Eq. 2 (line 170) also decides which sentence to translate. As the reviewer points out, a logical reasoning problem often contains sentences for declarations or even no information to solve the problem. To resolve this issue, we add a preprocessing step before applying CLOVER. Other neurosymbolic approaches [1, 2] decide which sentence to translate as in Eq. 1.\\n\\n---\\n[1] Satlm: Satisfiability-aided language models using declarative prompting. Ye et al., 2024. \\n[2] Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. Pan et al., 2023. \\n[3] Linc: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. Olausson et al., 2023. \\n[4] Faithful logical reasoning via symbolic chain-of-thought. Xu et al., 2024.\"}", "{\"comment\": \"> Formula #1 and #2 (which are syntactically the same) are both filtered out\\n\\nAccording to which rules, are both Formula #1 and #2 filtered out? 
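The cost-saving step described earlier in this thread, grouping logically equivalent candidate formulas so that only one representative per group needs further comparison, can be sketched with a propositional stand-in. Here `equivalent` brute-forces truth tables instead of calling a SAT solver, and all names and the six example candidates are ours, not the authors' code.

```python
from itertools import product

def equivalent(f, g, n_vars):
    # Propositional stand-in for a SAT-based equivalence check:
    # two formulas are equivalent iff they agree on every assignment.
    return all(f(v) == g(v) for v in product([False, True], repeat=n_vars))

def group_equivalent(formulas, n_vars):
    # Partition candidate formulas into logical-equivalence classes;
    # later stages then compare one representative per class.
    groups = []
    for f in formulas:
        for g in groups:
            if equivalent(f, g[0], n_vars):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups

# Six candidate translations over variables (a, b): three are equivalent
# to "a implies b" and three to "a and b".
candidates = [
    lambda v: (not v[0]) or v[1],
    lambda v: not (v[0] and not v[1]),
    lambda v: v[1] or not v[0],
    lambda v: v[0] and v[1],
    lambda v: v[1] and v[0],
    lambda v: not (not v[0] or not v[1]),
]
print([len(g) for g in group_equivalent(candidates, 2)])  # [3, 3]
```

With six candidates but only two equivalence classes, a single cross-group comparison remains instead of all fifteen pairs, mirroring the reduction described in the rebuttal.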
\\n\\nMy opinion, both structures are not correctly translated.\"}", "{\"title\": \"Thank you for the reply!\", \"comment\": \"> For comparison, we use gpt-4o-mini as a language model and measure the costs on the AR-LSAT annotated subset. Standard prompting and CoT prompting both cost 0.02 USD, Logic-LM costs 0.08 USD, SymbCoT costs 0.15 USD, and CLOVER costs 0.30 USD. CLOVER requires larger amount of inference costs compared to the baselines since the compositional first-order logic translation generates formulas for each logical dependency structure of a target sentence. However, we think that the increased inference time cost is worth for the significant performance gain in Table 1. We add this disscusion in Appendix H.\\n\\nCould you change this to token counts? (API-costs change over time). But otherwise, this sounds very nice, and I appreciate the authors looking into it. From the table and explanation given, it looks like CLOVER is 3.75 times more expensive than the leading baseline (Logic-LM). Is there a way to make this comparison more equal? I am not entirely familiar with how Logic-LM could be scaled, but it would be nice to show a fair comparison between the two baselines, which would strengthen CLOVER's results, in my opinion. \\n\\n>Most of CLOVER's errors on the AR-LSAT annotated subset are caused by incorrect preprocessing (the first equation of Eq. 2), ...\\n\\nThanks for showing some of them. I believe an extensive error analysis in the appendix (similar to how Logic-LM was done) would be beneficial. Specifically showing how CLOVER fails on these examples, where CLOVER fails to outperform Logic-LM or other baselines, etc. I didn't see them mentioned in the paper (but I could have missed them).\\n\\nI am upgrading my score to a 6 in light of these clarifications and changes.\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We thank the reviewer for the constructive and detailed comments. 
**Q** denotes a Question, and **W** denotes a Weakness not addressed in the Questions.\\n\\n**W1. The core idea of breaking down tasks into subtasks, using multiple samples, and verifying to select the best generation is not novel.**\\n\\nA. We would like to say that breaking down tasks into smaller ones and selecting the best generation is not our novelty. The novelty of this paper comes from how we break down logical sentences while preserving underlying logical structures and how we select the best first-order logic formula by fully leveraging the first-order logic semantics. To achieve the former one, we newly define a semantic parsing method called logical dependency parsing, and to achieve the latter one, we proposes two verification algorithms using a SAT solver. These contributions are summarized in the last paragraph of Introduction.\\n\\n**Q1. The approach is primarily applicable to problems that can be represented in a SAT solver, limiting its generalizability to other reasoning datasets, such as mathematical equations or visual reasoning tasks. Is it possible to extend CLOVER on these tasks?**\\n\\nA. We would like to emphasize that logical reasoning, where the problems are formally represented as a first-order logic, is one major category of reasoning. [2] states that logical reasoning plays a central role in intelligent systems for problem-solving, decision-making, and critical thinking. This paper and other prior works [1, 2, 3, 4] focus on enhancing the logical reasoning ability of LLMs. The other reasoning problems which require math or visual reasoning are out of the scope of this line of works.\\n\\n**Q2. The paper lacks comparison with CoT-based methods designed to improve implicit reasoning of language models [6]. Can the authors provide more insights into how CLOVER compares with these methods?**\\n\\nA. To clarify the meaning of CoT-based methods in the reviwer's question, we prepare two different answers. 
\\nIf the reviewer means CoT-based methods as extracting information (i.e., natural language comments with logical code) that is implied but not directly stated, SatLM [1], Logic-LM [2], and SymbCoT [4] belong to the methods designed to improve implicit reasoning of language models as in [6]. Specifically, [1] and [2] share nearly the same method as [6], where the main difference comes from which symbolic solvers they use. CoT [5] also extracts implicit information in natural language. We do compare CLOVER with these methods in Table 1, and the results show that CLOVER clearly outperforms all these methods. For CLOVER, the preprocessing step (the first equation of Eq. 2) includes extracting natural language sentences that are implied but not directly stated.\\n\\nIf the reviewer means CoT-based methods as CoT as is (i.e., step-by-step implicit reasoning through natural language), we want to emphasize the advantage of neurosymbolic approaches compared to CoT. While CoT falls short in complex logical reasoning tasks which need long sequences of reasoning, the neurosymbolic approaches including CLOVER resolve this issue by integrating a sound symbolic solver (line 39-48).\"}", "{\"title\": \"Rebuttal by Authors (1/2)\", \"comment\": \"We thank the reviewer for the constructive and detailed comments. **Q** denotes a Question, and **W** denotes a Weakness not addressed in the Questions.\\n\\n**Misunderstanding in Summary**\\n\\nWe observe that there is a slight misunderstanding in the reviewer's summary, in the following:\\n\\n> The first step is to translate a long and complex sentence into a number of short sentences, the second step is to translate each short sentence into simple first-order logical forms and the connections between/among these short sentences into corresponding logical connectors.\\n\\nTo clarify our method, the proposed compositional first-order logic translation consists of three steps. 
The first step is to parse a target sentence into logical dependency structures, the second step is to progressively accumulate each component of the structure while preserving its logical dependency, and the last step is to sequentially translate, starting from the atomic subsentence up to the target sentence. \\nIn addition, the proposed first-order logic verification algorithms are another novelty of this paper. To fully leverage the semantics of first-order logic, 1) we select the most frequent logically equivalent formulas (Logical Consistency), or 2) we disprove one of two formulas by judging if a counter-interpretation satisfies the target sentence. These are visually summarized in Figure 2.\\n\\n**W1. Authors did not discuss what sentences cannot be translated to first-order logic forms. This work is valued within fixed benchmark datasets where the meaning of natural language is a logical formula.**\\n\\nA. We would like to emphasize that logical reasoning, where the problems are formally represented in first-order logic, is one major category of reasoning. [2] states that logical reasoning plays a central role in intelligent systems for problem-solving, decision-making, and critical thinking. This paper and other prior works [1, 2, 3, 4] focus on enhancing the logical reasoning ability of LLMs. The other reasoning problems such as reading comprehension are outside the scope of this line of work.\\n\\n**W2. Authors use a SAT solver in evaluating and selecting correct first-order logical forms. This limits the method only for the case where SAT solvers work.**\\n\\nA. As we state in our answer to W1, every logical reasoning problem could be translated into first-order logic, which forms a SAT problem consisting of a theory, constraints, and a query. Then, a sound SAT solver could investigate whether two formulas are logically equivalent, or find counter-interpretations of the two formulas, which are then used for disproving.\\n\\n**W3. 
The formalism of the paper is not easy to read.**\\n\\nA. We describe our problem formulation and method with mathematical notations to explain them precisely. To make them easier to read, we add corresponding mathematical notations to Figure 2.\"}", "{\"title\": \"Rebuttal by Authors (2/2)\", \"comment\": \"Due to the space limit, we continue our answers in the following.\\n\\n**Q3. The paper only reports results using one language model. Can the authors evaluate CLOVER on a wider range of language models?**\\n\\nA. In this paper, we evaluate CLOVER on two language models, gpt-4o and gpt-4o-mini (the results using gpt-4o-mini are presented in Table 5, Appendix G). For further evaluation, we include two additional language models, gpt-3.5-turbo and gpt-3.5-turbo-instruct, which are another chat-focused model and an instruction-following model, respectively. As in Table 5, we compare the performance of CLOVER and neurosymbolic approach baselines on the Puzzle and Symbol datasets. If the symbolic solver cannot execute the solution, then we take random guesses. We exclude the performance of SymbCoT using gpt-3.5-turbo-instruct since the prompt including few-shot examples exceeds the context window of the language model. The following results show that CLOVER clearly outperforms the baselines across different language models. 
We add these results in Appendix G.\\nDue to limited computational resources, we mainly use OpenAI models, but we will expand our evaluation to other proprietary and open-source models.\\n\\nPerformance on the Puzzle dataset using different language models.\\n| Puzzle | Logic-LM | SymbCoT | CLOVER |\\n|------|:------:|:------:|:------:|\\n| gpt-4o-mini | 42.5 | 60.0 | **60.5** |\\n| gpt-3.5-turbo | 42.5 | 35.0 | **63.5** |\\n| gpt-3.5-turbo-instruct | 46.0 | N/A | **59.0** |\\n\\nPerformance on the Symbol dataset using different language models.\\n| Symbol | Logic-LM | SymbCoT | CLOVER |\\n|------|:------:|:------:|:------:|\\n| gpt-4o-mini | 38.4 | 46.5 | **71.7** |\\n| gpt-3.5-turbo | 24.2 | 27.3 | **60.6** |\\n| gpt-3.5-turbo-instruct | 50.5 | N/A | **70.7** |\\n\\n---\\n[1] SatLM: Satisfiability-aided language models using declarative prompting. Ye et al., 2024. \\n[2] Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning. Pan et al., 2023. \\n[3] LINC: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. Olausson et al., 2023. \\n[4] Faithful logical reasoning via symbolic chain-of-thought. Xu et al., 2024. \\n[5] Chain-of-thought prompting elicits reasoning in large language models. Wei et al., 2022. \\n[6] Reliable reasoning beyond natural language. Borazjanizadeh & Piantadosi, 2024.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"If we translate the reviewer's sentence, \\\"A barber shaves all who do not shave themselves.\\\", the output formula is $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p) \\\\rightarrow \\\\textit{shaves}(\\\\textit{barber}, p))$. 
In the above answer, we made a slight mistake in the verification: the formula is $\\\\hat{T}$-satisfiable, since the sentence is satisfied by the interpretation in which the barber shaves all people including himself, so the formula is not filtered out (line 310-311). We revise our answer accordingly.\\nIn addition, for better understanding, since the sentence comes from the barber paradox, we also translate the original sentence of the barber paradox, \\\"A barber shaves all those, and those only, who do not shave themselves.\\\". The output formula is $(\\\\forall p : \\\\textit{people})\\\\, (\\\\lnot \\\\textit{shaves}(p, p) \\\\Leftrightarrow \\\\textit{shaves}(\\\\textit{barber}, p))$. In this case, since there is no satisfying interpretation (i.e., the formula is $\\\\hat{T}$-unsatisfiable), the formula is filtered out (line 310-311). To ensure these results, we verify the formulas using a SAT solver. If the reviewer has a different opinion about the translation results, we would kindly ask for the reviewer's opinion.\"}", "{\"title\": \"Reminder for Paper Discussion\", \"comment\": \"Dear Reviewer 5CZv,\\n\\nAs the discussion period deadline approaches, we hope we have addressed your concerns regarding the novelty of our approach, its generalizability to other reasoning tasks, comparisons with CoT-based methods, and evaluations on additional language models.\\nWe would greatly appreciate it if you could provide a response to our rebuttal. Please let us know if you have any further questions or concerns!\\n\\nBest, \\nAuthors\"}" ] }
08FCLXDY3S
Augmentation-Driven Metric for Balancing Preservation and Modification in Text-Guided Image Editing
[ "Yoonjeon Kim", "Soohyun Ryu", "Yeonsung Jung", "Hyunkoo Lee", "Joowon Kim", "June Yong Yang", "Jaeryong Hwang", "Eunho Yang" ]
The development of vision-language and generative models has significantly advanced text-guided image editing, which seeks \textit{preservation} of core elements in the source image while implementing \textit{modifications} based on the target text. However, in the absence of evaluation metrics specifically tailored for text-guided image editing, existing metrics are limited in their ability to balance the consideration of both preservation and modification. Especially, our analysis reveals that CLIPScore, the most commonly used metric, tends to favor modification, resulting in inaccurate evaluations. To address this problem, we propose \texttt{AugCLIP}, a simple yet effective evaluation metric that balances preservation and modification. \texttt{AugCLIP} begins by leveraging a multi-modal large language model (MLLM) to augment detailed descriptions that encapsulate visual attributes from the source image and the target text, enabling the incorporation of richer information. Then, \texttt{AugCLIP} estimates the modification vector that transforms the source image to align with the target text with minimum alteration as a projection into the hyperplane that separates the source and target attributes. Additionally, we account for the relative importance of each attribute considering the interdependent relationships among visual attributes. Our extensive experiments on five benchmark datasets, encompassing a diverse range of editing scenarios, demonstrate that \texttt{AugCLIP} aligns remarkably well with human evaluation standards compared to existing metrics. The code for evaluation will be open-sourced to contribute to the community.
[ "evaluation metric", "text-guided image editing", "multi-modal representation" ]
https://openreview.net/pdf?id=08FCLXDY3S
https://openreview.net/forum?id=08FCLXDY3S
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vNqykaVjGe", "sVetMsRftZ", "qSzA2nbdnS", "lRPtz2UnGr", "gdy7eezcrx", "dyiVvcLWbk", "dNrYZzRvci", "cLlxZmcpHD", "GJNSnl5esX" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "comment" ], "note_created": [ 1731651227375, 1730293845891, 1730613115028, 1731651355678, 1730557889063, 1731650941258, 1731651404624, 1730566624344, 1731651654697 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3038/Authors" ], [ "ICLR.cc/2025/Conference/Submission3038/Reviewer_QH3u" ], [ "ICLR.cc/2025/Conference/Submission3038/Reviewer_dg3W" ], [ "ICLR.cc/2025/Conference/Submission3038/Authors" ], [ "ICLR.cc/2025/Conference/Submission3038/Reviewer_zaUq" ], [ "ICLR.cc/2025/Conference/Submission3038/Authors" ], [ "ICLR.cc/2025/Conference/Submission3038/Authors" ], [ "ICLR.cc/2025/Conference/Submission3038/Reviewer_LwcK" ], [ "ICLR.cc/2025/Conference/Submission3038/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you very much for your thoughtful and detailed reviews. We appreciate your insights and would like to address the points raised:\\n$\\\\textbf{Overclaim of contribution}$ We have not overclaimed the contribution, since we first analyzed the shortcomings of \\u201cdirectional CLIP similarity\\u201d specifically in the context of \\u201ctext-guided image editing\\u201d, not the shortcomings of \\\"CLIP-score\\\". As the terminology is confusing, we will revise the paper.\\n\\n$\\\\textbf{Usage of cocktail metrics is already shown to be ineffective in the paper}$ Combinations of existing metrics have been demonstrated in Appendix D.1. FID is not measured in a sample-wise manner, so it cannot be applied in our experiments. We hope this clarifies our experiments.\\n\\n$\\\\textbf{Misunderstanding in Figure 1, 2}$ Figure 1 is to show excessive modification, not preservation. 
Also, Figure 2 shows how directional CLIP similarity attends to edited regions, which is irrelevant to the calculation of cosine(f_source, f_edit). We hope this clarifies our intention.\\n\\n$\\\\textbf{Weighting strategy}$ The weights derived from Eq. 4 and 5 are applied as weightings for each attribute in the hyperplane optimization process. We will revise the paper to explicitly explain this procedure.\\n\\nWe are grateful for your valuable feedback and will make the necessary revisions to enhance the clarity and quality of our paper. Thank you once again for your time and insights.\"}", "{\"summary\": \"This paper introduces AugCLIP, a novel evaluation metric for text-guided image editing that balances both preservation of the source image and modification toward the target text. By leveraging a multi-modal large language model to extract fine-grained visual attributes and applying a hyperplane-based optimization approach, AugCLIP estimates a representation of a well-edited image that closely aligns with human evaluators\\u2019 preferences. Extensive experiments across five benchmark datasets demonstrate AugCLIP\\u2019s superior alignment with human judgments compared to existing metrics, particularly in challenging editing tasks. Consequently, AugCLIP offers a significant advancement in the evaluation of text-guided image editing, providing a more nuanced and reliable approach for assessing modifications while maintaining core image attributes. This metric holds promise for broader applications in personalized image editing and other vision-language tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel evaluation metric for text-guided image editing that balances both preservation of the source image and modification toward the target text. 
It demonstrates remarkable improvement in alignment with human evaluators on diverse editing scenarios such as object, attribute, and style alteration compared to all other existing metrics. Moreover, the metric is applicable to personalized generation, e.g., the DreamBooth dataset, where the objective is to identify the source object in the provided image and generate it into a completely novel context. This shows the flexibility of AugCLIP, which seamlessly applies to a variety of editing directions. Notably, the metric excels in identifying minor differences between the source image and the edited image, showing superb ability in complex image editing scenarios such as MagicBrush. The major contributions are summarized as follows.\", \"This paper is the first to point out CLIPScore\\u2019s reliability in text-guided image editing, as it frequently exhibits a bias towards modification rather than preservation and focuses on irrelevant regions.\", \"This work proposes AugCLIP, a metric for image editing by automatically augmenting descriptions via LLM and estimating a balanced representation of preservation and modification, which takes into account the relative importance of each description.\", \"In the experimental evaluations, AugCLIP demonstrates a significantly high correlation with human evaluations across various editing scenarios, even in complex applications where existing metrics struggle.\"], \"weaknesses\": [\"Overall this work makes an interesting and meaningful observation about the widely used CLIPScore metric, however there are still some concerns:\", \"**Discussion on broader indicators.** This work highlights the problem of CLIPScore in the problem analysis in Section 3. Do other quantitative indicators such as FID and LPIPS have similar problems? 
Please give a more comprehensive analysis.\", \"**Suitability for complex editing instructions or tasks.** There are many kinds of image editing tasks, including global editing such as style editing rather than just local editing. How does AugCLIP perform in this case?\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the author proposes a new metric called AugClip, for Text-guided Image Editing.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The author shows the disadvantage of ClipScore, and tries to design a new one.\\n\\n1. The key question is whether we can use a fusion of other existing metrics without introducing a new one.\\n2. The formulation of clipscore is not common.\", \"weaknesses\": \"1. Some notations are confusing.\\nT sometimes is text, while T sometimes is the set. \\n\\n2. Overclaims in contribution-1 \\\"We are the first to point out CLIPScore\\u2019s reliability in text-guided image editing\\\"\\nIn fact, most researchers recognize this point, and use a cocktail metric, like FID + Clipscore + SSIM + human evaluation. \\n[a] Holistic Evaluation of Text-to-Image Models. NeurIPS. \\n\\n3. Conflicts in contribution-2 \\nIn abstract, the author said using MLLM. \\nIn introduction, the author claims LLM. \\n\\n4. Contribution-3 said \\\"demonstrates\\\" but there is no mathematical proof. \\n\\n5. Figure 1 does not convince me. \\nWe could simply use cosine(f_source, f_edit) to see the preservation.\\nhttps://openaccess.thecvf.com/content/WACV2024/papers/Tanjim_Discovering_and_Mitigating_Biases_in_CLIP-Based_Image_Editing_WACV_2024_paper.pdf\\n\\n6. Figure 2 is similar to Figure 1. \\nWe could simply use cosine(f_source, f_edit) to see the preservation. \\n\\n7. Eq.1 is not commonly-used. \\nCould you show the reference? 
It does not make sense, since CLIP features cannot use plus or minus operations. \\nMost cases I read use cosine(f_modification text, f_editted image) \\n\\n8. One simple ablation is missing. \\nHow about the weighted sum like cosine(f_modification text, f_editted image) + 0.5*cosine(f_source image, f_editted image) ?\\ncosine(f_modification text, f_editted image) higher is better modification. \\ncosine(f_source image, f_editted image) higher is better preservation. \\nUsually, we will use the FID to indicate the preservation as well. \\n\\n9. I am confused about Eq. 3, 4, 5. \\nEq. 4, 5 are about a, not v.\\nEq. 3 is about v, not a. \\nBut the author said a can control v: \\\"the refined version of v is obtained through hyperplane optimization using as and at.\\\"\", \"questions\": \"Please see the weaknesses.\\n\\n1. The key question is whether we can use a fusion of other existing metrics without introducing a new one.\\n2. The formulation of clipscore is not common.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your thoughtful and detailed reviews. We appreciate your insights and would like to address the points raised:\\n\\n1. FID and LPIPS are metrics that do not take the target text into account at all, meaning they cannot evaluate whether an image has been modified in accordance with the target text. The reason we focused on analyzing CLIPScore (directional CLIP similarity) is that it is the only metric that jointly considers the source image and the target text.\\n\\n2. As demonstrated in Section 5.3, AugCLIP is adaptable to various editing scenarios. Furthermore, the benchmark datasets we used in the paper, as described in Appendix C.3 and detailed in Appendix A, encompass a wide range of contexts. 
This versatility allows for fine-grained evaluation of complex editing tasks through the use of descriptions generated by MLLMs.\\n\\nWe are grateful for your valuable feedback and will make the necessary revisions to enhance the clarity and quality of our paper. Thank you once again for your time and insights.\"}", "{\"summary\": \"This paper introduces AugCLIP, an evaluation metric designed for text-to-image editing tasks. AugCLIP aims to address limitations in CLIPScore, which cannot evaluate the preservation of the original input image. The method leverages GPT-4V for detailed descriptions of the source and target images. By creating a \\\"modification vector\\\" based on source and target attributes, AugCLIP balances preservation and modification. The authors demonstrate that AugCLIP outperforms metrics such as LPIPS, CLIPScore on various datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors evaluate AugCLIP on multiple benchmarks, demonstrating that AugCLIP outperforms CLIPScore and LPIPS with various editing methods and datasets.\", \"AugCLIP can evaluate both the modification and preservation of the edited images. Compared to CLIPScore, AugCLIP is a more comprehensive metric.\", \"By leveraging GPT-4V, AugCLIP can evaluate more fine-grained differences between the ground truth image and the edited image.\", \"The authors provide ablation studies to evaluate different components of AugCLIP.\"], \"weaknesses\": [\"It seems that the authors overclaimed their contributions. For example, in Line 081-083, they mentioned that \\\"We are the first to point out CLIPScore\\u2019s reliability in text-guided image editing\\\". As far as I know, many papers have pointed out the limitations of CLIPScore. Almost all image editing methods leverage CLIP to evaluate the modification, and LPIPS/FID to evaluate the preservation. 
For example, [1,2] provide both CLIP and LPIPS to evaluate the editability\\u2013fidelity tradeoff.\", \"The authors seem to confuse CLIP score with CLIP directional similarity score (*i.e.*, directional CLIP loss). From my understanding, the definition in Section 3.1 is more like the CLIP directional similarity score rather than the CLIP score. Please double check the definition of CLIP and CLIP similarity score in the following link:\\nhttps://huggingface.co/docs/diffusers/conceptual/evaluation.\\n\\n- The experiments only involve metrics like LPIPS, CLIP. Please consider including the tradeoff between CLIP and 1-PIPS or FID.\\n\\n- Introducing GPT-4V introduces additional overhead, which is not evaluated in the related experiments.\\n\\n[1] Zhang, Zhixing, et al. \\\"Sine: Single image editing with text-to-image diffusion models.\\\" CVPR 2023. \\n[2] Kawar, Bahjat, et al. \\\"Imagic: Text-based real image editing with diffusion models.\\\" CVPR 2023.\"], \"questions\": \"Please refer to **Weaknesses**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your thoughtful and detailed reviews. We appreciate your insights and would like to address the points raised:\\n\\n$\\\\textbf{Overclaim of contribution}$ We have not overclaimed the contribution, since we first analyzed the shortcomings of \\u201cdirectional CLIP similarity\\u201d specifically in the context of \\u201ctext-guided image editing\\u201d, not the shortcomings of \\\"CLIP-score\\\". As the terminology is confusing, we will revise the paper.\\n\\n$\\\\textbf{Experiments involve all suggested variants}$ Combinations of existing metrics have been demonstrated in Appendix D.1. FID is not measured in a sample-wise manner, so it cannot be applied in our experiments. 
We hope this clarifies our experiments.\\n\\n$\\\\textbf{Additional overhead is already analyzed}$ Please refer to Appendix C.2 for the additional computational overhead.\\n\\nWe are grateful for your valuable feedback and will make the necessary revisions to enhance the clarity and quality of our paper. Thank you once again for your time and insights.\"}", "{\"comment\": \"Thank you very much for your thoughtful and detailed reviews. We appreciate your insights and would like to address the points raised:\\n\\n1. In the ground truth test, we intentionally generated excessively preserved or modified samples using SD-1.5. However, for the Two-Alternative Forced Choice (2AFC) test, we manipulated images using the editing methods listed in Table 6 and created the survey based on these manipulated images. We will revise Section 4.3 to clarify this explanation further.\\n\\n2. The CLIPScore we define in our paper corresponds to directional CLIP similarity and is the only metric leveraging CLIP within the context of text-guided image editing. We acknowledge that this may have caused confusion, and we will make the necessary revisions to ensure clarity.\\n\\n3. Please refer to our response to point 1 for related details.\\n\\n4. Regarding Question 1, thank you for suggesting an excellent experimental idea. We will incorporate your proposed approach in future experiments.\\n\\n5. For Question 2, please refer to our response to point 2 for further clarification.\\n\\n6. For Question 3, details related to computation time can be found in Appendix C.2.\\n\\nWe are grateful for your valuable feedback and will make the necessary revisions to enhance the clarity and quality of our paper. Thank you once again for your time and insights.\"}", "{\"summary\": \"This paper proposed a novel evaluation metric, augclip, for text-guided image editing. 
Motivated by the observation that clip score biases towards modification instead of preservation, the authors utilized GPT-4v to rephrase the source prompt and target prompt to extract essential visual attributes in the form of text prompts. Then the authors trained classification models to classify source prompts from target prompts. Since CLIP itself aligns the image space and text space, the classification model trained with source prompts and target prompts could also be utilized to compute the minimum vector v that transfers the source image embedding to the target text embedding. The augclip metric is then defined to be the cosine similarity of the image embedding of the edited image and the vector sum of v and the source image embedding. The proposed augclip demonstrates superior alignment with human evaluations compared to clip score and lpips score.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The observation and visualization that clip score favors modifications instead of preservation is interesting.\\n\\n2. The idea to transfer the trained SVM from text space to image space and compute the minimum v from source image embedding to target prompt embedding is clever. \\n\\n3. The two-alternative forced choice testing and ground truth testing are reasonable.\", \"weaknesses\": \"1. Since the main contribution of this paper is the augclip evaluation metric, the authors should compare with more than one image editing method on each benchmark. However, the authors only evaluated the results of one image editing method, the results of stable diffusion 1.5 (which is just a text-to-image base model without editing capability itself) and the original reference image, with the proposed augclip. For a new evaluation metric, this is far from enough. For example, the authors showed the reference images from TEdBench [2] multiple times in the paper, yet they only evaluate the scores of Imagic+Imagen. 
There are other related works on this benchmark, for example, Forgedit[3] open-sourced their implementation and released their results on TEdBench on github.\\n\\n2. Incorrect clip score definition. The clip score in this paper, shown in equation 1 in section 3.1, is different from the usual clipscore being used in text-guided image editing literature [1]. For example, in Imagic[2] and Forgedit[3], the clip score metric's definition follows [1]. \\n\\n3. Most editing methods in table 6 in the appendix never appear in the paper and section 4.3 is not well written and thus is very confusing.\\n\\n[1] Clipscore: A reference-free evaluation metric for image captioning. In EMNLP\\n\\n[2] Imagic: Text-based real image editing with diffusion models. In CVPR\\n\\n[3] Forgedit: Text-guided Image Editing via Learning and Forgetting. In arXiv\", \"questions\": \"1. The main contribution of this paper is the new metric, augclip, for text-guided image editing. However, very few editing methods are tested with this new metric, which weakens the solidness of the metric. Considering the limited rebuttal period, I will not ask the authors to evaluate all editing methods in table 6 in the appendix. However, since the main benchmark illustrated in the paper is TEdBench[2], I suggest the authors evaluate Forgedit[3] with augclip since they also released the complete editing results of TEdBench on github. You have to compare image editing methods with augclip to demonstrate its effectiveness instead of text-to-image models like stable diffusion itself in your paper.\\n\\n2. The clip score definition in equation 1 is different from the mainstream reference[1]. Where does this equation 1 come from? Why is it used instead of [1]?\\n\\n3. How long does it take to train the augclip metric on each benchmark? 
\\n\\n\\nI am willing to raise my rating score if the authors could address my concerns in the revised version of this paper.\", \"references\": \"[1] Clipscore: A reference-free evaluation metric for image captioning. In EMNLP\\n\\n[2] Imagic: Text-based real image editing with diffusion models. In CVPR\\n\\n[3] Forgedit: Text-guided Image Editing via Learning and Forgetting. In arXiv\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you very much for your valuable reviews. We acknowledge that the mixed usage of CLIPScore and Directional CLIP similarity may have caused some confusion, and we plan to address this issue for better clarity. Additionally, we will make sure to explicitly define the scope of our comparisons, particularly in the context of text-guided image editing, to avoid any ambiguity. Your feedback is greatly appreciated, and we will strive to improve our work accordingly.\"}" ] }
0823rvTIhs
Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors
[ "Peiran Xu", "Yadong MU" ]
In this work, we focus on the task of weakly supervised affordance grounding, where a model is trained to identify affordance regions on objects using human-object interaction images and egocentric object images without dense labels. Previous works are mostly built upon class activation maps, which are effective for semantic segmentation but may not be suitable for locating actions and functions. Leveraging recent advanced foundation models, we develop a supervised training pipeline based on pseudo labels. The pseudo labels are generated from an off-the-shelf part segmentation model, guided by a mapping from affordance to part names. Furthermore, we introduce three key enhancements to the baseline model: a label refining stage, a fine-grained feature alignment process, and a lightweight reasoning module. These techniques harness the semantic knowledge of static objects embedded in off-the-shelf foundation models to improve affordance learning, effectively bridging the gap between objects and actions. Extensive experiments demonstrate that the performance of the proposed model has achieved a breakthrough improvement over existing methods.
[ "weakly supervised affordance grounding", "foundation model", "pseudo label" ]
Accept (Poster)
https://openreview.net/pdf?id=0823rvTIhs
https://openreview.net/forum?id=0823rvTIhs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xPWS79GR8d", "rdNmDAW7MY", "qVZ0BTPSMf", "o9Z7fjBw5x", "mxDYUq9TNq", "iOXTF1mj3y", "e22a71bilE", "bFJOrJ2Zpv", "YzPRbZSGQX", "ULt1MhFkaA", "R1C3tjZ6Ck", "Qo4FPbFisC", "NbEvLKPZsA", "N3YdSZfj4N", "L1CNf6eSdX", "5PgSfQiuap" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "decision" ], "note_created": [ 1730693042916, 1732006630513, 1732325661409, 1732682631485, 1732795059446, 1732324421172, 1732654243126, 1732323959323, 1732326535548, 1730701794570, 1730466025928, 1732324934347, 1732673535314, 1734675162439, 1732326276844, 1737523469347 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_Yqjp" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_dPdq" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_SXnA" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_7PPq" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_7PPq" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_SXnA" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Submission1796/Reviewer_Yqjp" ], [ "ICLR.cc/2025/Conference/Submission1796/Area_Chair_eaNR" ], [ "ICLR.cc/2025/Conference/Submission1796/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the task of weakly supervised affordance grounding (WSAG), where the goal is to identify affordance regions on objects using only image-level labels and human-object interaction 
images.\", \"the_key_contributions_include\": [\"A novel pseudo-supervised training framework and pipeline that leverages visual foundation models to generate affordance heatmaps, mapping affordance classes to object parts.\", \"Three key enhancements to improve performance:\", \"Label refinement using interaction cues\", \"Fine-grained object feature alignment with exocentric images\", \"Reasoning module for better generalization\", \"Extensive experiments demonstrating significant performance improvements over existing methods\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Clear writing and organization.\", \"Well-motivated technical approach with clear problem formulation.\", \"This paper propose a novel approach that uses visual foundation models and part-level semantic priors for WSAG, unleashing the power of these models for affordance learning.\", \"Using human occlusion cues for label refinement, which is an innovative insight.\", \"Comprehensive experimental validation and thoughtful analysis of limitations in existing methods.\"], \"weaknesses\": [\"Could benefit from more analysis of failure cases.\", \"The label refinement stage using human occlusion cues may be problematic when interactions are ambiguous or when multiple affordances exist.\", \"The mapping from affordance to part names is ad-hoc and manually crafted, which limits the scalability to new affordance types and more complex objects.\"], \"questions\": \"1. Could you provide more details about failure cases and limitations of the proposed approach?\\n2. How sensitive is the method to the results of VFM? How well can the refine state correct possible errors by VLpart and SAM?\\n3. 
How does the computational cost (training & inference) compare to existing CAM-based methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The proposed framework for weakly supervised affordance grounding (WSAG) uses pseudo-supervised learning to link affordance actions to object parts via part segmentation models and semantic cues. It generates and refines pseudo-labels by focusing on affordance-relevant regions with exocentric images, improving label accuracy and feature alignment. To enhance generalization, a lightweight reasoning module maps affordances to latent object part representations, enabling the model to handle unseen categories. By integrating semantic knowledge from foundation models, the framework transitions from weakly to pseudo-supervised learning, achieving a breakthrough in performance over prior methods\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper is clearly written and easy to follow.\\n2) The method is well-motivated, and the VFM-assisted pseudo-labeling should effectively address the challenges of the weakly-supervised setting.\\n3) The overall improvements over existing methods are quite significant.\", \"weaknesses\": \"My biggest concern lies in the experimental section. In Table 2, the reasoning model appears to negatively impact the baseline, and the other two design components only provide marginal improvements.\", \"questions\": \"Could the authors clarify why the baseline method in Table 2 outperforms existing state-of-the-art methods by such a significant margin? 
Based on the numbers in Tables 1 and 2, it seems the improvement over existing methods might primarily stem from a strong baseline, while the additional modules contribute only marginal benefits\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer 7PPq\", \"comment\": \"We sincerely thank the reviewer for the time and effort, and we are very grateful for the constructive and positive review. In response to the reviewer's suggestion, we have performed an additional ablation study on the visual encoder.\\n\\n| | | Seen | | | Unseen | |\\n| ------- | --------------- | ------------- | ------------- | --------------- | ------------- | ------------- |\\n| Encoder | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ |\\n| CLIP | 0.938 | 0.503 | 1.477 | 1.256 | 0.428 | 1.346 |\\n| DINO | 0.945 | 0.506 | 1.473 | 1.274 | 0.415 | 1.298 |\\n| DINOv2 | **0.894** | **0.511** | **1.538** | **1.191** | **0.434** | **1.363** |\\n| OWL-ViT | 0.957 | 0.491 | 1.451 | 1.239 | 0.416 | 1.334 |\\n| SAM | 0.999 | 0.482 | 1.421 | 1.304 | 0.394 | 1.253 |\\n\\nFor each encoder, we use the ViT-B version to make a fair comparison. It can be observed that all the evaluated encoders achieve performance surpassing previous CAM-based methods. Specifically, the default choice (CLIP) slightly outperforms DINO and OWL-ViT. This could be attributed to the use of CLIP encoder in the text branch, which enables the affordance query to interact more effectively with the visual features in CLIP's latent space. SAM, on the other hand, exhibits a larger performance gap compared to CLIP. One possible explanation is that SAM's encoder does not incorporate a class token, which hinders the cross-modal fusion (detailed in Section 3.2 and Appendix A.1) from extracting high-level semantic information. 
Conversely, DINOv2 achieves better results than the default option, likely owing to the favorable properties that emerge from self-supervised representation learning. This also aligns with DINOv2's strong performance in dense recognition tasks.\\n\\nIn summary, our pipeline works well with different visual encoders. As our goal is to establish a general framework that leverages foundation models for the affordance task, selecting the optimal visual encoder falls beyond the scope of this paper. However, we do observe that more advanced encoders yield superior overall performance, and we appreciate the reviewer's insightful remarks in this regard.\"}", "{\"comment\": \"Thanks for the authors' response. Some of my concerns are addressed while my concerns on the significance of the combination of the three components (i.e., weaknesses 1) and question 1 remain. I would like to keep my original score.\"}", "{\"title\": \"Additional Author Response to Reviewer SXnA\", \"comment\": \"We sincerely thank the reviewer for the feedback. Here we provide additional responses to the unresolved concerns.\\n\\n\\n**Significance of combining the three components** (Weakness 1)\\n\\nIn addition to the intuitive benefits we analyzed in the previous response, we would also like to quantitatively emphasize the significance of combining all three modules. Below is a brief summary of the information provided in Table 2 of the main text, where the best-performing single module (i.e., the alignment module) is compared with the combination. 
The results are averaged across the seen and unseen splits.\\n\\n| | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ |\\n| ------------------ | ----------------- | ----------------- | ----------------- |\\n| baseline | 1.097 | 0.466 | 1.412 |\\n| +alignment module | 1.050 (-4.3\\\\%) | 0.472 (+1.4\\\\%) | 1.446 (+2.5\\\\%) |\\n| +all three modules | **1.022 (-6.9\\\\%)** | **0.474 (+1.7\\\\%)** | **1.482 (+5.0\\\\%)** |\\n\\nIt can be observed that the full model achieves a 6.9\\\\% improvement on KLD and a 5.0\\\\% improvement on NSS over the baseline. Nonetheless, using any individual module leads to at most 4.3\\\\% and 2.5\\\\% improvement, respectively. We argue that these performance gaps imply non-trivial gains.\\n\\n**About cross-view alignment** (Question 1)\\n\\nWe fully agree with the reviewer that cross-view learning is common, but strongly argue that it does not serve as a basis to reject this work. Our previous response has elaborated on the differences between view alignment in the self-supervised learning literature and our task of interest. Here we would further clarify that the alignment of egocentric / exocentric images is actually a default setting in weakly supervised affordance grounding (Lines 84-91, Lines 266-275). Though the alignment per se does not imply any technical contribution, designing the scheme of extracting the most informative bits from exocentric images is non-trivial. We believe that the proposed foundation model-based scheme (Section 3.4), as validated in Appendix D.3 (Table 8), makes a solid and novel addition to this task, and we hope the reviewer could re-evaluate this work with this clarification.\\n\\n\\n**The relationship between cross-view alignment and pseudo labels** (Question 1)\\n\\n\\\"Aligning egocentric and exocentric images\\\" and \\\"utilizing pseudo labels\\\" are two orthogonal ideas in affordance grounding. 
This work explored both ideas in the proposed model, and each of them proves to be clearly beneficial.\\n\\nWe appreciate the time and effort devoted to reviewing this paper, and we always welcome further discussions.\"}", "{\"title\": \"Author Response to Reviewer Yqjp (Part 1)\", \"comment\": \"We sincerely appreciate the reviewer's constructive and thorough feedback. Below, we provide detailed responses to the reviewer's comments.\\n\\n**Failure cases analysis and limitations** (Weakness 1 \\\\& Question 1)\\n\\nWe present some examples of the failure cases in Figure 12 in the Appendix. As detailed in the caption of Figure 12, we observe two kinds of typical failure cases. First, for some small or intricate affordance regions (e.g. the part of a suitcase that affords dragging), the pseudo labels, even after refinement, are not accurate enough. Second, for some objects in the unseen test set that exhibit significant differences in shape or structure compared to the training objects (e.g., holding a cup vs. holding a wine glass), the model's generalization ability still requires improvement.\\n\\nWe kindly refer the reviewer to Appendix E for our discussion on limitations and future directions. In brief, possible improvements include: incorporating external knowledge to enhance reasoning capabilities on novel categories, introducing finer-grained image correlations to improve the alignment module, scaling up the dataset size, and establishing a more principled taxonomy for affordance.\\n\\n**Problem of the label refinement stage** (Weakness 2)\\n\\nThe reviewer is correct that the label refinement process may encounter difficulties when faced with complex affordances and interactions. In the refinement stage, we aim to focus on the affordance categories that are related to the human body, especially hands. Exocentric images of these categories (e.g. hold, hit, drag) usually contain clear human-object occlusions that are informative for our model. 
It is also worth noting that hand-related affordances hold particularly significant practical value for current research in embodied intelligence, as they are directly related to object manipulation.\\n\\n**Ad-hoc part name mapping** (Weakness 3)\\n\\nAs stated in Line 240-241, the manually constructed part name mapping is sufficient for the experiments on AGD20K, while implementing an automatically generated mapping is also feasible. In Table 5 (in Appendix), we present some examples of such a mapping generated by GPT-4o and the prompt we use. The results align well with the handcrafted mapping.\"}", "{\"comment\": \"Thanks for the detailed experiments! That totally makes sense to me.\"}", "{\"title\": \"Author Response to Reviewer SXnA\", \"comment\": \"We are grateful to the reviewer for the valuable insights and suggestions. In the following, we provide point-by-point responses to each concern.\\n\\n**Marginal improvements of the modules** (Weakness 1)\\n\\nThe performance gain brought by the combination of the three modules is indeed relatively modest. However, we would like to emphasize the complementary nature of the proposed modules. As discussed in Section 4.4, the reasoning module primarily improves the performance on the unseen split (due to enhanced generalization ability), the label refinement stage is particularly beneficial for the seen split (as the refined labels directly contribute to a better understanding of seen object categories), and the alignment loss benefits both splits (due to the robustness brought by introducing exocentric images). From the results in Table 2, each module demonstrates its intended effect, supporting the validity of our design. Moreover, by employing all three modules together, the model can achieve consistent and significant improvements over our baseline model across both splits. Therefore, we believe the design of these three modules is well-justified. 
As a whole, they can make full use of the hints from the exocentric images, the textual input, and the pseudo labels.\\n\\n**Effects of foundation model aided alignment stage** (Weakness 2 \\\\& Question 2)\\n\\nWe provide an in-depth ablation study on the alignment process in Appendix D.3, and the results are shown in Table 8. We copy the table here for ease of reference.\\n\\n| | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ |\\n| ------------------------ | --------------- | ------------- | ------------- |\\n| Ours-baseline | 1.256 | 0.428 | 1.346 |\\n| +alignment w.o. obj mask | 1.259 | 0.422 | 1.312 |\\n| +alignment w. obj mask | **1.176** | **0.437** | **1.407** |\\n\\n\\nBy examining the effect of using the exocentric object masks ($M^{\\\\text{exo}}_{\\\\text{obj}}$) in\\nthe alignment process, we found that the cross-view feature alignment is not helpful without the masks. Focusing on the object area has a significant advantage over globally pooling the exocentric feature maps, since it excludes the irrelevant information from the alignment process.\\n\\nIn Appendix D.3, we have also conducted ablation studies on the design of the reasoning module, the post-processing strategy of the refinement stage, the choice of the encoder (based on Reviewer 7PPq's suggestion), and the choice of the VFM (based on Reviewer Yqjp's suggestion). Please let us know if there are further concerns.\\n\\n**Need of exocentric images** (Weakness 3)\\n\\nThe proposed method indeed relies on exocentric images. However, we believe that unlabeled exocentric images, which do not need to be strictly paired with egocentric images (i.e., they may have different object instances), are relatively easy to obtain, especially when compared to pixel-level affordance annotations. 
For example, such images can be sourced from the internet (by searching for \\\"\\\\[affordance category\\\\] \\\\[object category\\\\]\\\"), video datasets capturing human activities, or generative models.\\n\\n**Relation with cross-view alignment in feature learning** (Question 1)\\n\\nTo the best of our knowledge, alignment between multiple views is a common strategy in self-supervised learning (e.g., SimCLR[1], DINO[2]). These works leverage the consistency between different augmented views of the same image to learn high-quality visual features. In the context of our task, however, the egocentric/exocentric views (as defined by Cross-View-AG[3]) are unrelated to augmentations. Instead, they refer to two distinct images containing objects of the same category but in different states. Consequently, the view alignment in feature learning is fundamentally different from our alignment process. While the former focuses on consistency under perturbations, the latter aims to transfer specific knowledge from the exocentric view to the egocentric view.\\n\\n[1] A simple framework for contrastive learning of visual representations. ICML 2020\\n\\n[2] Emerging Properties in Self-Supervised Vision Transformers. ICCV 2021\\n\\n[3] Learning affordance grounding from exocentric images. CVPR 2022\\n\\n\\n\\nWe hope the responses above have addressed the reviewer's concerns, and we are always open to further discussions.\"}", "{\"title\": \"Author Response to All Reviewers\", \"comment\": \"We would like to thank all the reviewers for the efforts they devoted and the suggestions they gave. In the rebuttal, we have provided responses and clarifications regarding the effect of the proposed modules, detailed ablation studies, computational cost, failure cases analysis, task setup, and connections to related topics. 
Meanwhile, we have made the following modifications to the paper (highlighted in red):\\n- A new paragraph in Appendix A.4 describing the computational cost of training.\\n- Two new experiments in Appendix D.3 on the choice of visual encoders and foundation models. (With the help of more advanced encoder and foundation models, we obtain even stronger performance than the originally reported ones!)\\n\\nWe hope that these contents could address the reviewers' concerns and present our work more clearly. If we have not fully addressed your questions or if you have any further inquiries, please do not hesitate to contact us.\"}", "{\"summary\": \"This paper tackles weakly supervised affordance grounding (WSAG) by leveraging foundation models to generate pseudo labels, departing from previous CAM-based approaches. The authors propose a three-stage pipeline: (1) using VLpart and SAM to generate initial pseudo labels by mapping affordance-object pairs to part names, (2) refining these labels using human-object interaction cues from exocentric images, and (3) training an affordance grounding model with the refined pseudo labels. The method also includes cross-view feature alignment and a reasoning module to handle unseen objects. 
The approach shows significant improvements over existing WSAG methods\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The problem is important and well-motivated, as affordance grounding is crucial for robotic manipulation and human-object interaction understanding\", \"The proposed pseudo-labeling approach effectively leverages existing foundation models (VLpart, SAM) to provide supervision, addressing limitations of previous CAM-based methods\", \"The label refinement process using exocentric images is novel and well-designed, providing a clever way to improve initial pseudo labels\", \"The reasoning module helps generalize to unseen objects, which is crucial for practical applications\", \"The writing is clear and the method is well-explained with appropriate visualizations\"], \"weaknesses\": \"The choice of CLIP as the vision encoder could be better justified given previous work suggesting limitations (vs DINO, OWLViT, SAM). For example, the paper would be stronger with an ablation study of different visual encoders.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a weakly supervised affordance grounding framework. It uses off-the-shelf foundation models to generate pseudo labels of object parts. To further improve the performance, a label refining strategy, a fine-grained feature alignment process, and a lightweight reasoning module are introduced. Experiments show promising results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Training affordance grounding models with object labels is an interesting question.\\n2. Using off-the-shelf foundation models to generate affordance labels is an interesting idea.\\n3. Experiments show promising results.\", \"weaknesses\": \"1. 
As shown in the ablation study in Table 2, the improvements of using all these three modules look marginal over using one module. It seems that the effectiveness of the three components is not significant.\\n2. In Section 3.4, the authors propose to align the features of exo- and egocentric images after SAM segmentation while the existing methods directly align the features of the two images. However, there are no solid experiments to show the effectiveness of this design.\\n3. The framework refines the affordance labels with the need of the corresponding exocentric image which may not be available sometimes.\", \"questions\": \"1. Aligning the features of an object from different views is a commonly used strategy for feature learning. How is this strategy related to pseudo label generation and refinement?\\n2. Some designs need more detailed ablation studies. E.g., how does the proposed fine-grained feature alignment process with SAM perform when compared with the previous work aligning the features directly? Is there any significant performance difference?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer Yqjp (Part 2)\", \"comment\": \"**Sensitivity to VFMs and effect of the refinement stage** (Question 2)\\n\\nWe would like to address the sensitivity to VFMs from the following perspectives.\\n\\n(1) It is important to choose the right type of VFMs in the first place.\\n\\nIn Appendix B.1, we explore three different ways of using VFMs for pseudo label generation, including directly using a multimodal LLM, directly using a foundation model specialized for detection, and combining a detection model with label mapping. From the results in Figure 6 and 7, we observe that only the third approach can produce reasonable labels in most cases. 
Therefore, the first step in using VFMs is to design a proper pipeline to translate their semantic priors into affordance-related information.\\n\\n(2) The errors in the initial pseudo labels can be fixed.\\n\\nGiven the relatively high-quality pseudo labels, our approach can correct the errors within them through two mechanisms, as shown in Figure 9 and 10 in the Appendix. On one hand, the supervised training pipeline will fix some **occasional errors**. Though the pseudo labels may miss some instances or deviate from the correct region, the trained model can make correct predictions because a lot of samples in the same category have correct labels (Figure 9). On the other hand, the label refinement stage will deal with some **systematic errors**, i.e. the cases where the VFM produces incorrect labels for the majority of images within a certain category. In Figure 10 we visualize the progress made by the refinement stage. These results effectively demonstrate the robustness of our approach to the errors of VFMs.\\n\\n\\n(3) The overall performance will increase with the capability of the VFMs.\\n\\nOur pipeline is not tied to VLpart and SAM, and they can be replaced by similar models. To validate this, we perform an additional ablation study on the choice of VFMs for label generation.\\n\\n| | | | Seen | | | Unseen | |\\n| -------- | ------- | --------------- | ------------- | ------------- | --------------- | ------------- | ------------- |\\n| det. | seg. | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ | KLD$\\\\downarrow$ | SIM$\\\\uparrow$ | NSS$\\\\uparrow$ |\\n| VLpart | FastSAM | 0.976 | 0.482 | 1.473 | 1.219 | 0.420 | 1.344 |\\n| VLpart | SAM | 0.890 | 0.510 | 1.547 | 1.153 | 0.437 | 1.418 |\\n| PartGLEE | SAM | **0.863** | **0.538** | **1.622** | **1.084** | **0.460** | **1.537** |\\n\\nHere, VLpart+SAM is our default choice. FastSAM [1] is a YOLO-based lightweight segmentation model which claims 50\\u00d7 higher speed than SAM. 
PartGLEE [2] is a very recent work following the task setting of VLpart. It can be observed that our approach achieves better performance with stronger foundation models. Specifically, using PartGLEE+SAM, the model establishes a new state of the art. Even when using the weaker VLPart+FastSAM, the model still outperforms previous CAM-based methods. We believe this is an encouraging sign, indicating that the development of general vision models will continue to drive progress in the field of affordance grounding.\\n\\nThis experiment will be added to Appendix D.3 in our revised version.\\n\\n[1] Fast Segment Anything. https://arxiv.org/pdf/2306.12156\\n\\n[2] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects. ECCV 2024\\n\\n**Computational cost analysis** (Question 3)\\n\\nWe provide an analysis of the efficiency of our model in Appendix A.4, and the model statistics are listed in Table 3 and 4. In brief, our model has an inference speed (\\\\~49 fps) comparable to that of previous methods like Cross-View-AG (\\\\~52 fps) and WSMA (\\\\~30 fps), and the computation is mainly concentrated at the visual encoder.\\n\\nAs for training, it takes about 1h to generate the initial pseudo labels for the training set. The label refinement stage requires approximately 1h, followed by around 3h for supervised training. The whole training process can be performed on a single NVIDIA GeForce RTX 2080Ti. As a reference, LOCATE's training scheme takes about 7h in the same environment, while WSMA takes about 2h. So the training cost of our method is basically at the same level as previous methods. Besides, our supervised training process is more straightforward than previous works, involving neither the clustering operations used by LOCATE nor the non-negative matrix factorization employed in WSMA. 
(This paragraph will be added to Appendix A.4 in our revised version.)\\n\\n\\n\\nWe hope the responses above have addressed the reviewer's concerns, and we are always open to further discussions.\"}", "{\"comment\": \"Thank you for the detailed response, I will keep my score.\"}", "{\"metareview\": [\"This paper introduces a VFM-assisted pseudo-labeling method to address the weakly-supervised affordance grounding task. The proposed approach incorporates three key modules: (1) a label refining strategy using exocentric images, (2) a fine-grained feature alignment process with exocentric images, and (3) a lightweight reasoning module. These modules collectively improve performance on the task. The use of exocentric images in both label refining and feature alignment is particularly novel and provides valuable insights. The primary concern is the marginal improvement in performance relative to the baseline method.\", \"Initial reviewer concerns focused on several aspects of the proposed method, including:\", \"The superior performance of the baseline method (dPdq) and the marginal improvements from the proposed modules (dPdq, SXnA)\", \"The ablation study on different visual encoders (7PPq)\", \"Failure cases and limitations (Yqjp)\", \"Sensitivity to VFM results (Yqjp)\", \"The ad-hoc nature of the affordance-to-part names mapping (Yqjp)\", \"Computational complexity (Yqjp)\", \"The necessity of exocentric images (SXnA)\", \"The novelty of cross-view alignment and its relationship to pseudo-label generation (SXnA)\", \"The authors have actively engaged with each of these concerns in their rebuttal. Most reviewers acknowledged that their issues had been addressed, leading to a positive shift in ratings: 8, 6, 6, and 5. 
However, Reviewer SXnA still expressed reservations regarding the marginal improvements of the proposed modules and the relationship between cross-view alignment and pseudo-label refinement.\", \"After carefully reviewing the paper, the reviewers' comments, and the authors' responses, the AC agrees with reviewers dPdq, 7PPq, and Yqjp that the VFM-assisted pseudo-labeling method is both effective and novel. The integration of the three proposed modules on top of the pseudo-labeling baseline leads to consistent performance improvements. Regarding Reviewer SXnA\\u2019s concern, the AC agrees with the authors\\u2019 explanation that cross-view alignment and pseudo-label refinement are distinct processes and not directly related.\", \"Given the positive ratings from most reviewers and the resolution of concerns - particularly regarding the novelty and effectiveness of the proposed method - the AC recommends accepting this paper for publication.\"], \"additional_comments_on_reviewer_discussion\": [\"Initial reviewer concerns focused on several aspects of the proposed method, including:\", \"The superior performance of the baseline method (dPdq) and the marginal improvements from the proposed modules (dPdq, SXnA)\", \"The ablation study on different visual encoders (7PPq)\", \"Failure cases and limitations (Yqjp)\", \"Sensitivity to VFM results (Yqjp)\", \"The ad-hoc nature of the affordance-to-part names mapping (Yqjp)\", \"Computational complexity (Yqjp)\", \"The necessity of exocentric images (SXnA)\", \"The novelty of cross-view alignment and its relationship to pseudo-label generation (SXnA)\", \"The authors have actively engaged with each of these concerns in their rebuttal. Most reviewers acknowledged that their issues had been addressed, leading to a positive shift in ratings: 8, 6, 6, and 5. 
However, Reviewer SXnA still expressed reservations regarding the marginal improvements of the proposed modules and the relationship between cross-view alignment and pseudo-label refinement.\", \"After carefully reviewing the paper, the reviewers' comments, and the authors' responses, the AC agrees with reviewers dPdq, 7PPq, and Yqjp that the VFM-assisted pseudo-labeling method is both effective and novel. The integration of the three proposed modules on top of the pseudo-labeling baseline leads to consistent performance improvements. Regarding Reviewer SXnA\\u2019s concern, the AC agrees with the authors\\u2019 explanation that cross-view alignment and pseudo-label refinement are distinct processes and not directly related.\"]}", "{\"title\": \"Author Response to Reviewer dPdq\", \"comment\": \"We sincerely appreciate the reviewer for providing a prompt and thorough emergency review. Here is our response to the reviewer's concerns.\\n\\n**Clarification of the baseline**\\n\\nThe \\\"baseline\\\" method in Table 1 refers to supervised training with the initial pseudo labels generated by the foundation models, i.e., the method described in Section 3.3. We would like to make it clear that the strong baseline itself is part of our contributions. Its improvement over previous methods is primarily attributed to the shift from CAM-based prediction to supervised training using pseudo labels. As mentioned in Line 77-84, the CAM is focused on the most discriminative part of the image, while it may fail to accurately capture the affordance region associated with an action. In contrast, the pseudo labels generated by the foundation models are of higher quality, and enable a consistent pipeline for training and inference (as shown in Figure 1(c)).\\n\\n**Improvements of the proposed modules**\\n\\nIn addition to the baseline method, the other part of our contribution lies in the design of the three extra modules. 
We would like to emphasize that these modules have necessary and complementary effects, as detailed in Section 4.4. The reasoning module is designed for improving generalization. Thus, as observed by the reviewer, it does not show an advantage on the seen split (better SIM but worse KLD and NSS), where the model has encountered all kinds of objects during training and the test set does not demand cross-category generalization. On the unseen split, where generalization ability is essential, including the reasoning loss consistently enhances the performance on all metrics. Similarly, the refinement stage has a more significant effect for the seen split, since the refined labels directly contribute to a better understanding of seen categories. The cross-view alignment works well for both splits, showing the benefits of introducing exocentric images to guide feature learning.\\n\\nAlso, from a more quantitative perspective, the improvements achieved by incorporating the three modules into the baseline model (e.g., -0.05 KLD on the Seen split and -0.10 KLD on the Unseen split) are comparable to the improvements brought by prior methods (e.g., Cross-View-AG+ vs. Cross-View-AG, WSMA vs. LOCATE). Overall, we believe that the design of these modules is necessary and meaningful. \\n\\nWe hope the content above addresses the reviewer's concerns regarding our experimental results, and we are always open to further discussions.\"}
07yvxWDSla
Synthetic continued pretraining
[ "Zitong Yang", "Neil Band", "Shuangping Li", "Emmanuel Candes", "Tatsunori Hashimoto" ]
Pretraining on large-scale, unstructured internet text enables language models to acquire a significant amount of world knowledge. However, this knowledge acquisition is data-inefficient---to learn a fact, models must be trained on hundreds to thousands of diverse representations of it. This poses a challenge when adapting a pretrained model to a small corpus of domain-specific documents, where each fact may appear rarely or only once. We propose to bridge this gap with synthetic continued pretraining: using the small domain-specific corpus to synthesize a large corpus more amenable to learning, and then performing continued pretraining on the synthesized corpus. We instantiate this proposal with EntiGraph, a synthetic data augmentation algorithm that extracts salient entities from the source corpus and then generates diverse text by drawing connections between those entities. Synthetic continued pretraining with EntiGraph enables a language model to answer questions and follow generic instructions related to the source documents without access to them. If the source documents are instead available at inference time, we show that the knowledge acquired through our approach compounds with retrieval-augmented generation. To better understand these results, we build a simple mathematical model of EntiGraph, and show how synthetic data augmentation can "rearrange" knowledge to enable more data-efficient learning.
[ "large language model", "synthetic data", "continued pretraining" ]
Accept (Oral)
https://openreview.net/pdf?id=07yvxWDSla
https://openreview.net/forum?id=07yvxWDSla
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yd8tFGWyJx", "xcpIwMrAGn", "xIfKUXhfmg", "rxk0sf19Zq", "qUF3938HfW", "eLFp6lBtWi", "eEZ1z1zkVr", "XuTj1BqoMv", "TrbgZIJQ9C", "OSxVhZp9Cy", "L9qNiFv4iv", "IABYnYr3bG", "6kOhXsZbnr", "5lDBbDNEH3", "3rGTGnG3IC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732225891580, 1732225833273, 1732226069812, 1730361166068, 1730559067227, 1732558521762, 1734617559824, 1732613172154, 1732523438812, 1737523699759, 1732725318966, 1730157910400, 1732226119710, 1730670144661, 1732225970771 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5336/Authors" ], [ "ICLR.cc/2025/Conference/Submission5336/Authors" ], [ "ICLR.cc/2025/Conference/Submission5336/Authors" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_wsFi" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_ipvv" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_nNRV" ], [ "ICLR.cc/2025/Conference/Submission5336/Area_Chair_gbTc" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_wsFi" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_ipvv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_9zM9" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_nNRV" ], [ "ICLR.cc/2025/Conference/Submission5336/Authors" ], [ "ICLR.cc/2025/Conference/Submission5336/Reviewer_9zM9" ], [ "ICLR.cc/2025/Conference/Submission5336/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 9zM9,\\n\\nThanks for your hard work and helpful feedback\\\\! 
Below we address your specific comments as best as we can, and we hope you will engage with us during the discussion period to clarify any remaining points.\\n\\n## Experiments on Other Domain-Specific Corpora\\n\\nThank you for your suggestion of additional evaluation on domain-specific corpora in a more complex field. We have conducted an experiment using lecture transcripts and the Coursera Exam QA dataset and discuss these results in the general comment.\\n\\nWe hope that our response has adequately addressed your concerns about generalization of Synthetic CPT to other domains.\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their hard work and detailed feedback.\\n\\nWe were glad that you support our motivation, mentioning that \\u201cthe problem\\u2026 is important\\u201d (9zM9) and that the EntiGraph approach is \\u201cwell-motivated\\u201d (wsFi). Further, we are happy you found that our method is \\u201cclean\\u201d (ipvv) and that the theoretical model \\u201cprovides insights into the mechanics of synthetic \\\\[data\\\\]\\u201d (9zM9), \\u201caligns well with empirical observations and provides a deeper understanding of \\\\[EntiGraph\\u2019s\\\\] scaling properties\\u201d (wsFi), and \\u201cprovides some good intuition\\u201d (nNRV).\\n\\nWe were also glad to see you highlight our empirical evaluation, mentioning that the experiments/evaluations are \\u201cfairly convincing\\u201d (ipvv), \\u201cconvincing\\u201d (nNRV), and \\u201ccomprehensive\\u201d (wsFi), and the downstream performance improvements are \\u201cnotable\\u201d, \\u201cclear\\u201d, and \\u201csignificant\\u201d (wsFi). Lastly, we are excited that the paper was \\u201cclear and well-written\\u201d (9zM9) and \\u201cclear and easy to read\\u201d (nNRV).\\n\\nBased on your feedback, we **conducted three additional experiments to better understand EntiGraph** and its properties. 
**Full results, setup details, and discussion are in Appendix I** (last 3 pages) of the manuscript, and we will add pointers in the main text. We summarize the results below:\\n\\n## Rebuttal Experiment 1: Ablation with weaker synthetic data generator (Appendix I.1) \\nFirst, based on the feedback of ipvv, wsFi, and nNRV, we conducted experiments **replacing the strong GPT-4-Turbo-based synthetic data generator with a weaker Llama 3.1 8B Instruct model**. This accounts for the concern that the gains of synthetic CPT arise from distilling a stronger model. \\n\\nWe observe consistent gains with the Llama 3.1 8B generator up to 334M synthetic tokens. Moreover, the slope of the scaling trend is similar to EntiGraph\\u2019s trend using the GPT-4-Turbo generator. In contrast, the log-linear slope is smaller for the Rephrase baseline (with Llama 3.1 8B Instruct, the same weaker synthetic data generator) as in our main scaling results in Figure 2 (Section 4.2). Altogether, this experiment demonstrates that **the gains of Synthetic CPT with EntiGraph are significant and reproducible even using standard open-source models**.\\n\\n## Rebuttal Experiment 2: Factuality and lexical diversity of generated synthetic data (Appendix I.2) \\nSecond, reviewers ipvv, wsFi, and nNRV suggested we obtain quantitative, intrinsic measures of our synthesized corpora. We conducted a **human evaluation of the factuality** of the EntiGraph corpus, and **measured lexical diversity by computing n-gram overlap statistics** between EntiGraph- and Rephrase-generated text and the source documents.\", \"factuality\": \"we have paper authors annotate random sentences from EntiGraph and find that **94.4% of non-subjective sentences are factually supported by the source document**. 
Contrasting with GPT-4\\u2019s poor closed-book performance on QuALITY (\\\\~51%), this experiment shows that grounding with source documents helps prevent hallucination.\", \"lexical_diversity\": \"for both EntiGraph and Rephrase-generated text, we find that only a small percentage of n-grams are exactly copied from the source documents. For example, less than \\\\~0.5% of 8-grams and \\\\~0.2% of 16-grams in the EntiGraph and Rephrase synthetic corpora appear in the source documents. This suggests that both methods produce lexically diverse knowledge representations.\\n\\n## Rebuttal Experiment 3: New dataset beyond QuALITY (Appendix I.3) \\nLastly, reviewers 9zM9, wsFi, and nNRV suggest that we **test our method on another domain**, ideally in a \\u201ccomplex field\\u201d (9zM9) or a \\u201charder topic\\u201d (nNRV). To that end, we **conducted scaling experiments using the Coursera Exam QA dataset \\\\[1\\\\]**, which contains 15 lecture transcripts and exam questions from advanced technical courses. **This** **setting is even more data-scarce, with only 124K raw tokens** (compared to 1.3M in QuALITY). We find that EntiGraph consistently delivers log-linear improvement in accuracy, outperforming the base model and Rephrase baseline.\\n\\n**Altogether, these carefully constructed evaluations show that Synthetic CPT robustly enables an LM to learn from niche domains, and that these gains arise from diverse representations of knowledge rather than distillation.**\\n\\nThanks again for the helpful feedback. We hope to engage with you to clarify any remaining points\\\\!\\n\\n\\\\[1\\\\] An et al., 2023\\\\. L-Eval: Instituting Standardized Evaluation for Long Context Language Models.\"}", "{\"comment\": \"Dear Reviewer wsFi,\\n\\nThanks for your hard work and helpful feedback! 
Below, we address your specific comments as best as we can, and we hope you will engage with us during the discussion period to clarify any remaining points.\\n\\n## Evaluation with Other Datasets from Diverse Domains\\n\\nThank you for your suggestion to show that Synthetic CPT works on other datasets and domains. We have conducted an experiment using lecture transcripts and the Coursera Exam QA dataset and discuss these results in the general comment. \\n\\n## Quantitative Evaluation of Hallucination Rates in Generated Text\\n\\nThanks for suggesting that we quantitatively measure the hallucination rate of the generated text. We have measured the factuality and lexical diversity of our synthetic corpora and discuss these results in the general comment.\\n\\n## Demonstrating Synthetic CPT with an Open-Source Synthetic Data Generator\\n\\nWe agree that demonstrating Synthetic CPT works with a weaker, open-source synthetic data generator would be useful. We have performed this experiment and discuss our results in the general comment.\\n\\n## Alternative Data Augmentations\\n\\nTo the best of our knowledge, our work is the first to propose an augmentation specifically designed for learning in a data-constrained setting. Therefore, there are not many established baselines, leading us to adapt the prompts of WRAP [1] to our setting. We believe it is exciting future work to design and benchmark new augmentations for this setting. However, given the cost of end-to-end experiments, we believe this is out-of-scope for this paper.\\n\\n## Comparison to Manually Curating Data\\n\\nWe are specifically interested in learning from corpora with niche knowledge, such as rare articles/books or cutting-edge technical content. This knowledge is by construction not well-represented in standard internet text. 
Moreover, some of the reading comprehension questions in our test set refer directly to the source document (\\u201cwhat did X say about the topic\\u201d), and therefore the baseline of Raw CPT on source documents could be viewed as a form of strong data curation baseline. \\n\\n## Ambiguous or Domain-Specific Entities\\n\\nWe believe that our existing experiments with QuALITY already demonstrate that EntiGraph works well with domain-specific entities, because QuALITY is composed of niche narratives with unique characters, objects, or concepts (e.g., entities such as \\u201cethergram\\u201d, \\u201cSatterfield City\\u201d, and \\u201cspasticizer\\u201d). Moreover, our experiments with Coursera more directly test EntiGraph\\u2019s effectiveness with real-world, specialized entities from technical content.\\n\\nThank you again for your thorough suggestions to help improve the paper. We hope that our response has adequately addressed your concerns, such as generalization to other datasets, hallucination in the synthetic data, and use of a strong data generator.\\n\\nWe would greatly appreciate it if you could engage with us during the discussion period on any remaining barriers to raising your score.\\n\\n[1] Maini et al, 2024. Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling. https://arxiv.org/abs/2401.16380\"}", "{\"summary\": \"The paper proposes \\\"synthetic continued pretraining\\\" to enhance language models' domain adaptation efficiency, particularly when adapting to a small corpus of domain-specific documents. The authors introduce EntiGraph, a synthetic data augmentation algorithm that constructs a knowledge graph from the entities in the original corpus and synthesizes diverse text to enhance the pretraining process. 
The approach was evaluated using a reading comprehension dataset, QuALITY, showing notable improvements in model performance with both closed-book and open-book scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed EntiGraph approach for generating synthetic data is well-motivated and demonstrates clear improvements in downstream performance, as shown by the experimental results.\\n2. The paper includes comprehensive evaluations, including closed-book QA, instruction following, and open-book settings. The results show a significant performance improvement over baselines, validating the effectiveness of synthetic pretraining.\\n3. The authors provide a theoretical analysis of EntiGraph's effectiveness, which aligns well with empirical observations and provides a deeper understanding of its scaling properties.\", \"weaknesses\": \"1. The evaluation relies on the QuALITY dataset, which may not be representative of all types of small corpora. A broader range of datasets, particularly from diverse domains, would make the results more generalizable.\\n2. Although the authors attempt to mitigate hallucinations by grounding synthetic data generation in the original corpus, the risk of generating inaccurate information is inherent in using a language model for synthetic generation. This aspect needs further empirical examination, such as quantitative metrics to evaluate hallucination rates.\\n3. The approach relies on using strong language models like GPT-4 for synthetic data generation. The practical feasibility of using this approach might be limited if users do not have access to such models due to their computational cost. What if it was replaced with LLama 3 8B?\\n4. 
While the paper includes useful baselines such as \\\"Rephrase CPT,\\\" more comparisons with alternative data augmentation or synthetic generation methods from recent literature could strengthen the claim that EntiGraph is an effective strategy.\", \"questions\": \"1. How sensitive is the synthetic pretraining process to the specific hyperparameters used for entity extraction and relation analysis? Would tuning these parameters significantly affect the generated corpus quality?\\n\\n2. How does the synthetic corpus compare to a manually curated dataset in terms of quality and impact on downstream tasks?\\n\\n3. Could EntiGraph be used effectively in scenarios where entities are ambiguous or domain-specific (e.g., medical or legal texts)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method to continue pretraining LLM with a synthetic data augmentation method. The method is based on expanding the training corpus with many verbalizations of the entity graph present in the training corpus. It moves from a sparsely verbalized entity graph to a more densely verbalized one by using only the source documents and prompting LLMs to generate the new tokens.\\n\\nThe paper shows that the method is beneficial for downstream tasks in closed- and open-book QA as well as RAG. 
\\n\\nOverall, I think the paper is worthy of acceptance, it proposes a clean method with good results and the experiments are fairly convincing.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper does a good job at demonstrating the benefit of the synthetically generated data, by including relevant natural baselines.\\nThe proposed method seems to work well and can be useful for continued pre-training tasks.\", \"weaknesses\": \"The work relies on commercial and closed-source models (GPT4) for generating the synthetic data, making this work non-reproducible. Since the data generation process is the central contribution, it would have been interesting to have insights about how well different models can perform this data generation task.\\n\\nThe paper proposes only extrinsic evaluation of the generated data but does not provide intrinsic measures, i.e., how good is the generated text? \\n\\nIn my opinion, section 6 is not particularly useful. It is unnecessarily mathematical, based on simplistic assumptions and does not bring useful insights (For many continuously increasing lines, there anyway exists a mixture-of-exponential that fits it)\", \"questions\": \"For the data generator, what type of models are necessary to have good performance? (why use GPT4 and not open-source models)\\nThe paper shows that the generated data is useful, but what does it look like? (is it good quality text, factual, natural looking, ...) \\nWhat is the significance of section 6?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The additional experiments are quite helpful, I've changed my scores accordingly.\"}", "{\"metareview\": \"The paper proposes a data-synthesizing method for continued pretraining for adapting an LLM to a specific domain, where the proposed approach is based on EntiGraph including entities and their relations. 
In addition, the authors derived bounds for their scaling trends.\\n\\nAll reviewers agree that this is an interesting and solid paper. Reviewers have raised a few minor concerns and questions, which may be addressed in the revision.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers unanimously recommended acceptance.\"}", "{\"comment\": \"Thank you for your clarification and additional experiments. I will raise the score.\"}", "{\"title\": \"Thanks for the answer\", \"comment\": \"Thank you for the clarification and answers, I find the rebuttal experiments convincing and I'll keep my 'accept' score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"comment\": \"Thanks! Will keep my positive score.\"}", "{\"summary\": \"This paper addresses how to train LLMs in a data scarce regime, given that LLMs require O(1000) examples of a fact to actually \\\"learn\\\" it. This has applications both to niche corpora (e.g., mathematics textbooks) as well as to training larger models once all human text is exhausted. The authors propose to use a pre-trained LLM to (1) extract entities and summaries from a comparatively small, niche corpus, and (2) use the extracted entities to generate rephrased assertions about those entities, to facilitate learning by a second (here, smaller) LLM. They experiment with a 1.3M token reading comprehension dataset, and test the approach against several baselines, including closed-book tests on the LLM used to extract entities and the rephrased text used to train the second LLM. Finally, the authors present a mathematical model through which they attempt to understand the training behavior of this data augmentation system.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The experiments are convincing that the EntiGraph approach improves the LLM's ability to accurately answer questions about a small corpus. 
In particular the closed-book results in Figure 3 show that the EntiGraph approach leads to far more salient claims per false claim than any of the other models, including GPT-4, or training the LLM (Llama 3 8B). The benefit is substantially less in the open-book RAG case, but there is still substantial improvement. The theoretical model to explain how the model improves QA accuracy with increasing tokens provides some good intuition as to how the model learns.\\n\\nOverall the text is clear and easy to read.\", \"weaknesses\": \"I still have reservations that there is some amount of distillation of GPT-4 into their Llama 3 8B: it seems possible to me that a RAG-prompted GPT4 could generate additional information that is somehow \\\"unlocked\\\" by the RAG prompt, but which the closed-book version was unable to access. At the risk of anthropomorphizing, this is akin to a human getting a visual or audio cue and suddenly recalling whole complex memories. It would make the paper stronger to dig into the results of entity extraction and the generated text to see whether it is rephrasing/paraphrasing, or whether possibly actual new information is injected.\\n\\nEven so, it would have helped this reader to have pointed out the significance of the closed book experiments earlier on. It isn't stated explicitly until the Limitations section.\\n\\nI don't feel particularly qualified to check your proofs of theorems, and moreover I think the main value of the theoretical model is to help the reader understand intuitively why the approach works (these may be connected observations). Is all of the theory necessary? Perhaps a simulation would do as well?\\n\\nAnother issue is that much of the benefit of the approach vanishes (though not completely) when using a RAG model directly. Is this approach worth the extra training, given the modest gains? The core problem, really, is how many examples LLMs take to learn anything well. 
This paper finds a way to side-step that successfully, but doesn't solve it directly.\", \"questions\": \"The paper could be more robust if you had more than just the QuALITY dataset. It is a perennial problem to find hard datasets to work with, so I understand this may be all there is for now, but given the chance I would attempt to reproduce the results on a different set. The authors mention linear algebra (a much harder topic, I think): is there any corpus for that subject?\\n\\nThe presentation of how exactly you generate the text to train Llama 3 8B with EntiGraph is still a little fuzzy to me, in particular it would be nice to see some examples of what you generated. It is helpful to have the prompts, but some output always grounds the presentation. \\n\\nFinally, I imagine GPT-4t made errors in producing the training data--did you search for these? Even at a quick glance how often did it make errors, and what, if anything, did you do to filter them out?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer nNRV,\\n\\nThanks for your hard work and helpful feedback! Below we address your specific comments as best as we can, and we hope you will engage with us during the discussion period to clarify any remaining points.\\n\\n## Risk of Distillation from GPT-4 Data Generator\\nThank you for pointing out the risk of distillation with GPT-4. To more concretely test this, we have performed an experiment with a weaker Llama 3.1 8B Instruct generator, and discuss our results in the general comment.\\n\\n## Additional Datasets on Harder Topics\\n\\nThanks for suggesting that we test out Synthetic CPT on a harder dataset. We have conducted an experiment using lecture transcripts and the Coursera Exam QA dataset and discuss these results in the general comment. 
\\n\\n## Hallucination in Synthetic Data\\n\\nThanks for noting that we could see hallucinations with our synthetic data generator. We have conducted a human evaluation to test the factuality of EntiGraph-generated data, which we discuss in the general comment. We did not explicitly filter out hallucinations in the paper\\u2019s experiments, and ultimately find in the human evaluation that the hallucination rate is very low.\\n\\n## Role of Theory Section\\n\\nThe role of the theory section is to explain why Synthetic CPT does not need to create new knowledge de novo to improve performance. The core intuition is explained at the beginning of the section in L417-L421. Regarding the mathematical complexity, the main mathematical result is actually a simple generalization of the celebrated coupon collector\\u2019s problem with a minor twist: instead of collecting new coupons, EntiGraph collects new knowledge, but the probability of collecting a particular piece of new knowledge differs. As a result, instead of having an exponential growth as in the coupon collector\\u2019s problem, we end up with a mixture of exponential growth.\\n\\nWe hope that our response and additional experiments have adequately addressed your concerns about distillation, further evaluation on harder datasets, and hallucination in the synthetic data.\\n\\nWe would greatly appreciate it if you could engage with us during the discussion period on any remaining barriers to raising your score.\"}", "{\"summary\": \"This paper addresses the problem of data inefficiency in pretraining language models. 
Current pretraining corpora may not generalize effectively and models may benefit from structured, repeated, diverse representations of knowledge.\\n\\nThe proposed method is a two-step process that (1) extracts entities from the corpus and then (2) extracts relationship information amongst a subset of the entities.\\n\\nExperimentation uses the QuALITY corpus and dataset, which is a benchmark for long-document reading comprehension. Evaluation compares with relevant baselines like training on the original QuALITY corpus and a corpus containing rephrasings.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The problem the work addresses is important.\", \"Experimental results show that this method scales better than simple paraphrasing or direct pretraining, and that retrieval-augmented generation further boosts performance of this model.\", \"The authors also present a theoretical model explaining EntiGraph\\u2019s log-linear scaling pattern, providing insights into the mechanics of synthetic data\\u2019s impact on learning efficiency.\", \"Paper is clear and well-written.\"], \"weaknesses\": \"While the experiments focus on the QuALITY corpus, it remains unclear how well this would apply to other domain-specific corpora or more complex fields (e.g., legal or math data).\", \"questions\": \"It says \\u201cWe generate data for pairs D_{Ei, Ej} and triplets D_{Ei, Ej, Ek} in our experiments\\u201d. I wonder if the authors have any intuition about how performance changes with the size of subset k.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer ipvv,\\n\\nThanks for your hard work and helpful feedback! 
Below we address your specific comments as best as we can, and we hope you will engage with us during the discussion period to clarify any remaining points.\\n\\n## Demonstrating Synthetic CPT with an Open-Source Synthetic Data Generator\\n\\nThank you for your suggestion to investigate whether Synthetic CPT works with a weaker, open-source synthetic data generator. We have performed this experiment and discuss our results in the general comment.\\n\\n## Intrinsic Evaluation of Generated Text\\n\\nThanks for suggesting that we provide intrinsic measures of the generated text. We have measured the factuality and lexical diversity of our synthetic corpora and discuss these results in the general comment.\\n\\n## Role of Theory Section\\nThe role of the theory section is to explain why Synthetic CPT does not need to create new knowledge de novo to improve performance. The core intuition is explained at the beginning of the section in L417-L421. Regarding the mathematical complexity, the main mathematical result is actually a simple generalization of the celebrated coupon collector\\u2019s problem with a minor twist: instead of collecting new coupons, EntiGraph collects new knowledge, but the probability of collecting a particular piece of new knowledge differs. As a result, instead of having an exponential growth as in the coupon collector\\u2019s problem, we end up with a mixture of exponential growth.\\n\\nWe hope that our response, experiments with a weaker generator, and intrinsic evaluation of the generated data have adequately addressed your concerns.\"}" ] }
07cehZ97Xb
How to Build a Pre-trained Multimodal model for Simultaneously Chatting and Decision-making?
[ "Zuojin Tang", "Bin Hu", "Chenyang Zhao", "De Ma", "Gang Pan", "Bin Liu" ]
Existing large pre-trained models typically map text input to text output in an end-to-end manner, such as ChatGPT, or map a segment of text input to a hierarchy of action decisions, such as OpenVLA. However, humans can simultaneously generate text and actions when receiving specific input signals. For example, a driver can make precise driving decisions while conversing with a friend in the passenger seat. Motivated by this observation, we consider the following question in this work: is it possible to construct a pre-trained model that can provide both language interaction and precise decision-making capabilities in dynamic open scenarios? We provide a definitive answer to this question by developing a new model architecture termed Visual Language Action model for Chatting and Decision Making (VLA4CD), and further demonstrating its performance in challenging autonomous driving tasks. We build VLA4CD on the basis of a transformer-based LLM architecture. Specifically, we leverage LoRA to fine-tune a pre-trained LLM with data of multiple modalities covering language, visual, and action. Unlike the existing LoRA operations used for LLM fine-tuning, we have designed new computational modules and training cost functions for VLA4CD. These designs enable VLA4CD to provide continuous-valued action decisions while outputting text responses. In contrast, existing LLMs can only output text responses, and current VLA models can only output action decisions. Moreover, these VLA models handle action data by discretizing and then tokenizing the discretized actions, a method unsuitable for complex decision-making tasks involving high-dimensional continuous-valued action vectors, such as autonomous driving. 
The extensive experimental results on the closed-loop autonomous driving platform CARLA validate that: (1) the model construction method we proposed is effective; (2) compared to the state-of-the-art VLA model, VLA4CD can provide more accurate real-time decision-making while retaining the text interaction capability inherent to LLMs.
[ "vision language action model; decision making; autonomous driving; multimodal" ]
https://openreview.net/pdf?id=07cehZ97Xb
https://openreview.net/forum?id=07cehZ97Xb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmq8cnaprI", "oddjCu8DNs", "nCa6vedl3H", "bKDOg4N0kn", "Y71L1QXdyM", "XWgi3IR6mM", "W4mIJuzCDV", "VLXJ78KIbE", "T35bt7pWMj", "QopMnjJpfR", "Q8ojaxajAh", "LUEVVWaLLF", "Dg9v8diaT5", "7B1JtdFtP4", "61MRidcDQb", "3nV5SxFIsr" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732609710624, 1731659688065, 1731662101225, 1730621782897, 1732626427444, 1732626825252, 1732650100754, 1732609832179, 1732888081580, 1730529852342, 1732651031925, 1731640465431, 1730831692880, 1731650164421, 1731640876911, 1731640348045 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_UzGf" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_UzGf" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_UzGf" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_FHJw" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_vhvA" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_UzGf" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Reviewer_FHJw" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ], [ "ICLR.cc/2025/Conference/Submission2963/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I moved my comment under my review to avoid confusion. 
Sorry for the mistake!\"}", "{\"title\": \"Response to Questions\", \"comment\": \"***Response to Q1 and Q2:***\\n\\nYou have pointed out a very good issue. I apologize for the oversight in our writing. The entire lines 151-157 regarding the character description should be revised as follows:\\n\\n> \\\"We consider a multimodal setting similar to (Xiao et al., 2020), wherein, at each time step $t$, upon the agent performing an action $a_t$, the environment returns an observation consisting of both visual and textual modalities, denoted by $(o_t, \\hat{w}_t)$. Our objective is to build a generative model:\\n> \\n> $\\pi(a_t, \\hat{w}^*_t \\mid o_{t-H}, \\hat{w}_{t-H}, a_{t-H}, \\ldots, o_t, \\hat{w}_t)$\\n> \\n> which can generate both high-quality action decisions and text responses, given a sequence of historic observations and actions. Here, $\\hat{w}^*_t$ denotes a text-formed response to the text-formed input $\\hat{w}_t$. If $\\hat{w}_t$ is a question, then $\\hat{w}^*_t$ can be seen as its answer given by our model. $H$ denotes the length of the context.\\\"\\n\\n***Response to Q3:***\\n\\nFor simplicity, we omitted the index (n_i). We set the token counts for text and image inputs uniformly to 424 and 64, respectively, corresponding to the parameter `cutoff_len` in Table 6 and `num_patches` in Table 7.\\n\\n***Response to Q4:***\\n\\nAs shown in the different time series descriptions in Appendix 11, the text descriptions of Other Sensors Input 32 to Other Sensors Input 44 mainly come from fixed templates, with little variability, almost solely dependent on numerical changes. In real scenarios, this variability would significantly increase. Using the cross-entropy loss function may cause the model to struggle to distinguish between different time-step Other Sensors Input descriptions, leading to inaccurate predictions. 
As a result, the final output action values are almost identical, insensitive to numerical changes in text inputs, and unable to achieve finer-grained decision-making and differentiation. Therefore, adding label smoothing loss can enhance the model's perception of differences in Other Sensors Input descriptions at different time steps, making it more sensitive to numbers, and enabling finer-grained decision-making and control.\\n\\n***Response to Q5:***\\n\\nAlthough the VLA4CD (no language) model architecture is still based on Figure 1, with inputs still being images and text, ideally, human questions and {s_t^{l+1}, .., s_t^{l+n}} should be removed during training. However, to quickly verify the impact of removing language loss, we adopted the direct removal approach.\\n\\n***Response to Q6:***\\n\\nWhen balancing the two losses for DriverGPT4 and OpenVLA, we used the same hyperparameter settings as VLA4CD, with the specific parameter settings detailed on lines 263-264.\\n\\n***Response to Q7:***\\n\\nWe adopted the same discretization strategy for `action_bin` as OpenVLA and RT2, which involves discretizing continuous actions (acceleration and steering) into a fixed number of bins using a uniform discretization method. The discretized action values are then mapped to the end tokens of the pre-trained tokenizer's vocabulary to generate the corresponding tokens for actions. Similarly, these tokens can be inversely mapped back to the original continuous action values. This strategy allows encoding continuous actions as discrete tokens while preserving the precision of the continuous action information.\\n\\n***Response to Q8:***\\n\\nThis is a very good question. In Table 9, we compared the designs with and without image reconstruction loss when the inputs are both images and text. We found that the design with image reconstruction loss further enhances the final decision control. 
Our action space extends to multiple dimensions, which can be fully achieved by changing the output dimension of the action linear projection layer. Here, we only use two dimensions because we mainly consider the two key variables in autonomous driving scenarios.\\n\\n***In summary***\\n\\nOverall, you have raised very valuable and insightful comments. We will ensure to revise the paper and clearly address the points you mentioned. Finally, we sincerely hope that these explanations will alleviate your concerns and that you will reconsider your score.\"}", "{\"title\": \"Response to Weaknesses and Questions\", \"comment\": \"Thanks for your time and efforts in reviewing our paper! We highly appreciate your thoughtful and constructive suggestions, which have been invaluable to us, and we have carefully considered each comment. Our responses to your queries are outlined below:\\n\\n***Response to Weaknesses1:***\\n\\nThank you very much for your recognition of our proposed questions, but I would also like to take this opportunity to provide a more detailed introduction to our motivation. As you mentioned, \\\"As stated in the abstract, driving and chatting simultaneously, however, the proposed LLM-based model is autoregressive decoding. From the architecture overview, it can be seen that the model can only provide predictions after a very long answer output.\\\" The motivation behind the VLA4CD model is to simulate the human ability to handle multiple tasks in complex environments, particularly in the parallel processing of action decision-making and dialogue generation. With the rapid advancements in LLMs within the field of NLP, significant research has focused on fine-tuning pre-trained LLMs to perform various tasks in specific domains. Existing approaches often involve fine-tuning LLMs independently for each task using its respective training data. 
This approach not only incurs high computational costs but also isolates the knowledge of each task, preventing efficient sharing across tasks, which can lead to suboptimal overall performance.\\n\\nTo address these issues, we propose the VLA4CD integrated model. This model consolidates data from all tasks and performs unified fine-tuning on a single pre-trained LLM, while also incorporating specialized loss functions tailored for multimodal and multitask learning. This design significantly reduces computational overhead and, through efficient task sharing, enhances the overall performance of the VLA model in multitask scenarios.\\n\\n***Response to Weaknesses2:***\\n\\nWe propose the VLA4CD integrated model. This model consolidates data from all tasks and performs unified fine-tuning on a single pre-trained LLM, while also incorporating specialized loss functions tailored for multimodal and multitask learning. This design significantly reduces computational overhead and, through efficient task sharing, enhances the overall performance of the VLA model in multitask scenarios. \\n\\nYour point that \\\"the method of combining these two tasks (chatting and decision-making) does not actually predict simultaneously but sequentially\\\" is very insightful. Due to our unclear expression, it caused a misunderstanding. VLA4CD emphasizes not only multimodal output but also multimodal synergy, specifically sequential prediction. In autonomous driving tasks, the importance of decision-making is higher than question answering. 
Therefore, we can enhance decision-making through text and images, as shown in Table 4, where we maintain the input as images and text, and removing the language part from the loss function significantly reduces the system's decision-making ability, corresponding to line 483, \\\"we see that including language in the loss function significantly enhances the quality of decision-making.\\\" This means sequentially predicting A based on text sensor input and Q, and then predicting action based on images, text sensor input, and Q together. This sequential prediction highlights the priority and importance of the two tasks, which we believe is a very valuable point.\\n\\n***Response to Weaknesses3:***\\n\\nThank you for your detailed prompts. We will thoroughly check the errors in the paper and revise and explain the points you mentioned regarding the interpretation of our views.\\n\\n***Response to Questions1:***\\n\\nDuring training, the minimum batch size for images is 8, with a context length of 1 or 4 in each batch. During inference, real-time inference is performed for each frame image based on the environment.\\n\\n***In summary***\\n\\nOverall, you have raised very valuable and insightful comments. We will ensure to revise the paper and clearly address the points you mentioned. 
Finally, we sincerely hope that these explanations will alleviate your concerns and that you will reconsider your score.\"}", "{\"summary\": [\"The authors propose VLA4CD, a model for self-driving that is capable of generating both language and action tokens.\", \"VLA4CD is trained using three different types of loss functions: language generation, action generation, and image reconstruction.\", \"The trained model demonstrates superior performance in both decision-making and question answering compared to models such as DriverGPT4 and OpenVLA in the gym-carla environment.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The main contribution of this work is that the authors add a language generation capability to VLA in self-driving scenarios.\", \"The paper is well-structured, making it easy to read and understand the authors' approach.\", \"The finding that separating language and action prediction loss can improve decision-making is a significant contribution that provides valuable insights into how VLAs can be effectively trained. It is encouraging to see that this is empirically demonstrated to be useful in self-driving scenarios. However, it is concerning that introducing some language noise into the training dataset can have a considerable impact on decision-making processes. Since real-world datasets will inevitably contain substantial noise, developing methods to ensure robustness against such noise is essential for the model's practical application.\"], \"weaknesses\": [\"I don't quite understand why VLAs need to chat based on the author's motivation. Chatting is an inherently multi-turn conversation with a specific topic, but an example of such capability of the model is completely lacking. I wonder what the authors' definition of chatting is. The model doesn't actually \\\"chat\\\" but simply outputs action description. 
It is far from the example in the introduction where authors want to build a model that can talk with a friend while driving.\", \"Text generation has already been explored with DriveGPT4. In this paper, text generation is not used for any novel applications other than simply translating action tokens into language. I fail to understand why the authors claim text+action generation is something novel since there's already a model that does it.\", \"Adding chat capabilities could potentially make the model less robust when exposed to noise in language interactions. Since the model learns to associate language with specific action tokens, any slight disruption to that association (e.g., due to noise) could significantly impair its action prediction performance. If the model can engage in unrestricted conversation, it is likely to encounter more noise, which could seriously affect its decision-making abilities, which is the most important goal of VLAs. It might be more effective for VLAs to focus solely on action prediction and incorporate chat functionality with separate models. With the current motivation, it seems there is no strong rationale or necessity to integrate chat capabilities into VLAs.\", \"Following the point above, I feel like the paper could be better framed on how to manage loss when training VLAs, which is a much more interesting topic.\"], \"typo\": \"Figure 1 interatcion\", \"questions\": [\"How does the model compare to DriveGPT4 and why does DriveGPT4 do so badly? DriveGPT4 is doing exactly the same thing as this model aims to do (text generation + action generation).\", \"Why are there no use cases of the model actually chatting? How do the authors define chatting? 
The example of the introduction mentions the authors are inspired by human driver talking to a friend while driving, but the model doesn't actually engage in free form chat that goes beyond a single step.\", \"How does author plan to make the model robust to noise when exposed to unrestricted chat?\", \"Why should we add language generation capability to VLAs? The motivation for that seems non-existent in this paper and there's no novel use case of the generated language.\", \"Why do the authors think using separate loss for language and action generation (unlike DriveGPT4) improves the decision making performance?\", \"Why do the authors only focus on the self-driving task?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your feedback.\\n\\n***Response to W1:*** Thank you for your valuable suggestions. We can further clarify our motivation. Most existing models follow a 1+1 multimodal pattern, while our VLA4CD framework implements a 1+2 multimodal fine-tuning pattern in LLMs, which can be similarly extended to a 1+N multimodal fine-tuning pattern. Here, the first \\\"1\\\" represents the input of one modality, and the second \\\"2\\\" or \\\"N\\\" represents the output of two or N modalities. Our motivation is to fully utilize the information contained in one input modality to output multiple modalities while ensuring that the output modalities have the capability of their corresponding 1+1 models. Additionally, our experimental results show that VLA4CD performs exceptionally well in both QA and decision-making capabilities for specific scenarios compared to existing methods. Therefore, the proposed 1+2 fine-tuning approach can improve the utilization and decision-making efficiency of the model's input modalities, and it can be further extended to a 1+N pattern in the future. 
Furthermore, since the large model itself already possesses basic everyday chatting capabilities, there is no need to fine-tune the QA capability for everyday chatting through LoRA. Therefore, to differentiate the content from everyday conversational abilities, we chose task-specific QA for specific scenarios, aiming to distinguish it from the inherent chatting content of the original model and demonstrate the effectiveness of our method in expanding multitask capabilities.\\n\\n***Response to W2:*** Thank you for your valuable thoughts. We emphasize that the textual QA output is independent of actions for the following reasons: The input to our model is sensor input + random Q + image, where Q is randomly sampled from the question set in Appendix A.10, which includes questions related to daily driving scenarios and also empty Qs. Since Q is generated randomly at each time step, but VLA4CD demonstrates strong decision-making capabilities while answering random questions, as shown in Table 1, 2, and 3, this indicates that the decision-making capability is independent of the QA content, which also suggests that the QA content can be further expanded. However, it is important to note that the sensor input must be relevant to the current driving scenario, while the content of Q can be random. This can be seen in Table 5, where adding noise to the sensor input affects the final decision-making capability. Therefore, as long as the information in the sensor input is noise-free and the content of QA is expandable, the model can maintain strong robustness and ensure that the decision-making capability is not disturbed.\\n\\n***Response to W3:*** Thank you for your concern. We did not use the CARLA leaderboard method as a baseline. Instead, we proposed two main metrics: Driving Score (DS) and Average Reward (AR), where DS = ER * AR, as detailed in Appendix A.5. Therefore, DS is strongly correlated with AR. 
The calculation of AR is based on the same reward function $f$ [1] from the gym-carla Benchmark, similar to most RL evaluation methods. Our VLA4CD also uses the same reward function to evaluate different baselines' AR, which demonstrates the rationality of using AR for evaluation. The reward function in Appendix A.4 includes comprehensive evaluations of collision behavior, lateral speed control, longitudinal speed control, overspeed control, lane keeping, steering and driving stability, and static punishment. We added ER because, although AR covers most scenario evaluations, the static punishment in the original function f is only \\\"-0.1,\\\" which is insufficient. AR alone cannot consider the scenario of standing still. In the gym-carla Benchmark, the entire episode is reset if any collision or lane departure occurs. Therefore, we introduced DS = ER * AR, where ER = N_completed steps / N_total steps, representing the effective driving completion rate without penalty round interruption mechanisms. Thus, DS ensures evaluation under effective driving conditions, including more reasonable AR scores under static conditions. DS can be seen as a discount on AR scores, indicating that our evaluation is reasonable. From Table 1, Table 2, and Table 3, VLA4CD's AR and DS values are leading, which I believe is sufficient to demonstrate the effectiveness of our method.\\n\\n[1].Chen, Jianyu, Shengbo Eben Li, and Masayoshi Tomizuka. \\\"Interpretable end-to-end urban autonomous driving with latent deep reinforcement learning.\\\" IEEE Transactions on Intelligent Transportation Systems 23.6 (2021): 5068-5078.\"}", "{\"comment\": \"Thank you very much for your feedback.\\n\\nThank you for your responses regarding W1, Q2, Q3, and Q4. 
Please refer to the Post-Rebuttal responses to reviewer FHJw for W1, W2, and W3 for detailed information.\"}", "{\"comment\": \"W1: I understand that using 2 large models for chatting and decision making is computationally costly, however, that is not a strong motivation for combining the two since this operates on an assumption that you need an on-board chatting LLM. Furthermore, if authors want to prove that multitask training improves driving performance, I recommend comparing it with non-chatting baselines on standard metrics like reviewer FHJw suggested. In the end, a self-driving LLM is of no practical use if its core action generation capability lags behind other baselines. Chatting is just an auxiliary feature.\", \"q1\": \"Makes sense, thanks!\", \"q2\": \"I agree with the definition of chatting; can the authors provide a metric or evaluation that proves that the VLA4CD retains its chatting capability? There are works that show some mix of instruction tuning data harms general conversational capability (e.g. https://aclanthology.org/2024.acl-srw.15.pdf).\", \"q3\": \"It makes sense but I don't think any of that is empirically shown in the paper?\", \"q4\": \"Same concern as W1.\\n\\nQ5&6: Makes sense.\"}", "{\"title\": \"Post-Rebuttal\", \"comment\": \"Huge thanks to the authors for duly responding to my concerns about the current work.\\n\\nPost the author comments, the current standings of the three weaknesses that I initially raised are as follows:\\n\\n(W1) **Unclear motivation:**\\n\\nThe authors' response \\\"Our model retains the inherent language generation and everyday chatting capabilities of large language models, although we did not demonstrate this in the paper.\\\" is a claim that is ungrounded empirically. 
If the authors propose to use the original weights of the language model (without LoRA weight offsets) for open-ended dialog (thus retaining its abilities), then the novelty of this contribution is further diminished as it simply demonstrates LoRA's ability and not that of the current work.\\n\\n(W2) **Relatedness of text responses to action outputs, evaluation:**\\n\\nAs the authors respond that the textual outputs to the questions are *independent* of the actions, this seriously questions the motivation further. In essence, a model can get away with generating responses that look like text without being grounded in the current state or any semblance of soundness. This is a huge concern wrt application of this work.\\n\\n(W3) **No comparisons to performance numbers from literature**:\\n\\nAll quantitative numbers reported are the authors' experiments on customized settings within the CARLA environment, without comparisons to numbers from prior works (despite pointing out some options in the review), thus raising questions around the soundness of empirical comparisons. The authors' response that reinforcement learning requires customized benchmarks is not valid as other related works do consider consistent comparisons to systematically benchmark progress (see review).\\n\\nAs a result, I'd like to maintain the current ratings and encourage the authors to more empirically address these weaknesses in further iterations of this work.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes to simulate MLLMs as humans in real-world situations that require both chatting and decision making. For example, a human driver can drive safely while having conversations with passengers. This is an important application problem in autonomous driving systems.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The problem that developing a chatting and simultaneous decision making, itself is underexplored and important.\\n2. The proposed model gives both reasonable question answering output and reasonable action output.\", \"weaknesses\": \"1. I don't buy the idea that simply concatenating the question answer data and action prediction data together and supervised fine-tuning the LLaVA-like MLLM can solve the proposed problem. As described in the abstract, driving and chatting simultaneously, however, the proposed LLM-based model is autoregressive decoding. From the architecture overview, we can see this proposed model can only provide a prediction after a very long answer output. No inference speedup or simultaneous decoding technology is being used or proposed to achieve this.\\n2. The only contribution to me seems combining action prediction and question answer data, which is very trivial. No significant improvement is achieved compared with a specialist model on each single task. And the approach to combining these two tasks (chatting and decision making) does not actually achieve simultaneous prediction, but sequential prediction.\\n3. The paper needs to be revised in the writing. For example, notations in Figure 1 are totally missing. Training and architecture details are missing in the Experiments section, etc.\", \"questions\": \"1. What is the average number of images per training and inference case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Further concerns:\", \"w1\": \"How do you ensure chat ability is not affected by your finetuning? There's no empirical evidence on that and there are cases where general open-ended dialogue capability gets harmed by finetuning on specific downstream tasks (look at my comment above for reference).\", \"w2\": \"How do you ensure that the system is robust toward noise in the dialogue? 
Since open-ended dialogue inherently will contain scenarios unrelated to driving, it needs to be robust toward any type of dialogue but the paper does not show that.\", \"w3\": \"You mentioned that \\\"We did not use the CARLA leaderboard method as a baseline,\\\" and failed to explain why you did not do that other than explaining your metric. I am still skeptical of why authors chose a difficult path of implementing their own metrics while you could easily compare the model with public baseline using the same environment.\"}", "{\"title\": \"Response Part (2/3)\", \"comment\": \"***Response to Q2: Why are there no use case of the model actually chatting? How do the authors define chatting? The example of the introduction mentions the authors are inspired by human driver talking to a friend while driving, but the model doesn't actually engage in free form chat that goes beyond a single step.***\\n\\nThe motivation behind the VLA4CD model is to simulate the human ability to handle multiple tasks in complex environments, particularly in the parallel processing of action decision-making and dialogue generation. With the rapid advancements in LLMs within the field of NLP, significant research has focused on fine-tuning pre-trained LLMs to perform various tasks in specific domains. Existing approaches often involve fine-tuning LLMs independently for each task using its respective training data. This approach not only incurs high computational costs but also isolates the knowledge of each task, preventing efficient sharing across tasks, which can lead to suboptimal overall performance.\\n\\nTo address these issues, we propose the VLA4CD integrated model. This model consolidates data from all tasks and performs unified fine-tuning on a single pre-trained LLM, while also incorporating specialized loss functions tailored for multimodal and multitask learning. 
This design significantly reduces computational overhead and, through efficient task sharing, enhances the overall performance of the VLA model in multitask scenarios.\\n\\nWe define \\\"chatting\\\" as open-ended, multi-turn conversations, and VLA4CD enhances its parallel generation capabilities in decision-making and dialogue tasks through LoRA fine-tuning. Our model retains the inherent language generation and everyday chatting capabilities of large language models, although we did not demonstrate this in the paper. In the paper, we only showcased dialogues related to driving scenarios, as large language models inherently possess everyday chatting abilities.\\n\\n***Response to Q3: How does author plan to make the model robust to noise when exposed to unrestricted chat?***\\n\\nGreat question! The key points you mentioned are how the model maintains robust decision-making capabilities in the presence of increased noise and how to extend VLA4CD's everyday chatting abilities by altering the QA system content.\\n\\nAs seen in Table 5, when sensor input noise increases, the model's decision-making capabilities are indeed affected. This is because noise interferes with the model's perception input, thereby affecting its decisions based on these inputs. However, VLA4CD is designed to minimize this impact when sensor inputs are relevant to driving tasks. Therefore, as long as the sensor input information is noise-free, the model can maintain strong robustness, ensuring that decision-making capabilities are not disrupted.\\n\\nFor the need to extend everyday chatting capabilities, we can follow this design approach. When expanding the QA system, we can keep the sensor input relevant to the driving scenario and only adjust the QA dataset content. This approach does not interfere with the model's decision-making ability, as modifying the QA content does not involve changes to sensor data or environmental perception data. 
Therefore, by reasonably adjusting and expanding the QA pairs, the model can maintain efficient decision-making capabilities in driving tasks and exhibit stronger dialogue capabilities in extended chatting scenarios.\\n\\nIn short, as long as the sensor input is effective and stable, the VLA4CD model can maintain high robustness and efficiency in decision tasks while adding everyday chatting functions.\\n\\n***Response to Q4: Why should we add language generation capability to VLAs? The motivation for that seems non-existent in this paper and there's no novel use case of the generated language.***\\n\\nThe VLA4CD model mimics human multitasking abilities in complex environments, aiming to generate multiple outputs simultaneously to enhance decision efficiency and user experience. Unlike traditional single-task models, VLA4CD handles both action decision-making and language generation tasks, simplifying system architecture and reducing coordination issues. By integrating language generation capabilities into a unified model, VLA4CD better adapts to complex tasks, improving decision consistency and effectiveness. Inspired by human ability to simultaneously make decisions and converse in autonomous driving scenarios, VLA4CD was initially designed for specific tasks but innovates by extending multitask processing capabilities, enabling the model to handle a broader range of applications, enhancing adaptability and interactivity. For example, the VLA4CD model can manage multiple tasks in smart home management, such as adjusting temperature, controlling lighting, and checking security devices; it can also parallel process tasks in scenarios like controlling multiple robotic arm systems, showcasing its advantages in multitask parallel processing.\"}", "{\"summary\": \"The current manuscript proposes to build a large language model (LLM) capable of understanding multiple modalities like text, vision, and actions; and producing them as outputs. 
In particular, it develops a Visual Language Action model for Chatting and Decision Making (VLA4CD) that produces continuous actions without losing its ability to chat with a user simultaneously. Notably, the action space is not discretized and kept continuous, unlike prior works in this area. The paper also demonstrates experiments on CARLA dataset to claim that this approach is effective and can provide real-time decision making compared to prior art, while retaining its text interaction capability.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(S1) In general, the intended direction of this work, i.e., a model that can take actions while retaining the ability to generate textual responses to a user is useful. Please see weaknesses for further discussion.\\n\\n(S2) The technical details as presented in the paper are easy to understand and follow.\", \"weaknesses\": [\"(W1) The current manuscript suffers from a clear lack of motivation for why we need a model that can produce both actions and also \\u201cchat\\u201d (L21, for instance) with a user. There are two main problems here:\", \"Throughout, the ability of \\u201cchatting with people\\u201d (L88) has not been characterized well. It is not open-ended dialog on any topic but rather an explanation of what actions to take or why it has taken a certain action in a given situation. This is misleading as currently phrased.\", \"Much of the motivation is around \\u201ca human driver can operate while chatting with a friend\\u201d, which does not apply to why we need a unified model. For instance, why not have an actuation model and an open-ended dialog model in the autonomous vehicle to achieve the above desired motivation? 
This indicates the lack of a clear motivation from an application standpoint.\", \"(W2) Even if one were to scope the \\u201cchatting with users\\u201d ability down to producing explanations as responses to a fixed set of templated questions (see A.10), the manuscript does not follow through via corresponding experiments. Both actions and text-generation capability has been evaluated independently, once again begging the question as to why such a unified system is useful. There are no experiments to verify the following:\", \"The model actually actuates based on the textual outputs? I.e., if the model responds with \\u201cI will take the right lane in 20 mins\\u201d, does it actually do that?\", \"Are these textual explanations correct/sound given the state of the environment?\", \"What is the correlation of the GPT-4o score evaluation with human evaluation?\", \"(W3) There are some concerns around the experimental validation of the proposed methodology:\", \"The reported experiments on town03 and town04 from the CARLA environment do not seem to match with any of the existing benchmarks with prior works (C, D).\", \"To further exacerbate this issue, none of the baseline results are from literature and have been reported based on reproductions in this work.\", \"Missing baselines, see [A] for more information.\", \"This raises serious questions about the efficacy and usefulness of the proposed methods from an empirical standpoint. Why were existing, standardized benchmarks not used for model comparisons? Request the authors to address these concerns without which the benefits of this approach will remain unclear.\", \"References\", \"[A] DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving. https://arxiv.org/pdf/2312.09245.\", \"[B] Think2Drive: Efficient Reinforcement Learning by Thinking with Latent World Model for Autonomous Driving (in CARLA-v2). 
https://arxiv.org/pdf/2402.16720\", \"[C] CARLA Autonomous Driving Leaderboard. https://leaderboard.carla.org/\", \"[D] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving. https://arxiv.org/pdf/2205.15997\"], \"questions\": \"(Q1) L154: Do you also include model textual response \\\\hat{w}_i, i = {1,..,H} in w_i?\\n\\n(Q2) Eq 1: \\\\hat{w} is overloaded as both the model textual response and the embeddings of text inputs\\n\\n(Q3) Eq 1: Each of w_i might have a different number of tokens (different from n). Do you pad them to n or is the index (n_i) dropped for brevity? That is: (w_i^1, w_i^2\\u2026.w_i^{n_i}) instead of just (w_i^1, w_i^2\\u2026.w_i^{n})\\n\\n(Q4) L222-L225: The observed phenomenon is not clear here. Referring to the appendix also doesn\\u2019t add more details, apart from the empirical observation. Can the authors describe this with an example? \\n\\n(Q5) VLA4CD (no-language): What is the architecture, inputs for this model? Ideally, the human question in the input and the {s_t^{l+1}, .., s_t^{l+n}} must be removed while training.\\n\\n(Q6) L413: How did you balance the two losses for DriverGPT4? Did you have a hyperparameter search for the loss weights similar to your approach?\\n\\n(Q7) L477: The reasoning here is heavily dependent on the discretization strategy used for each environment. How were the actions discretized for this environment? Was there a hyperparameter search performed to get the best strategy?\\n\\n(Q8) L140-142: How is this problem avoided in the current setup? It\\u2019s not clear in the text here.\\n* Action space dimension is small, i.e., 2 (acceleration and steering) How does this scale with more variables?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"Thanks for your time and efforts in reviewing our paper! 
We highly appreciate your thoughtful and constructive suggestions and have carefully considered each comment. Our responses to your queries are outlined below:\\n\\n***Response to Weaknesses1: \\"What is the motivation of our paper?\\"***\\n\\nThe motivation behind the VLA4CD model is to simulate the human ability to handle multiple tasks in complex environments, particularly in the parallel processing of action decision-making and dialogue generation. With the rapid advancements in LLMs within the field of NLP, significant research has focused on fine-tuning pre-trained LLMs to perform various tasks in specific domains. Existing approaches often involve fine-tuning LLMs independently for each task using its respective training data. This approach not only incurs high computational costs but also isolates the knowledge of each task, preventing efficient sharing across tasks, which can lead to suboptimal overall performance.\\n\\nTo address these issues, we propose the VLA4CD integrated model. This model consolidates data from all tasks and performs unified fine-tuning on a single pre-trained LLM, while also incorporating specialized loss functions tailored for multimodal and multitask learning. This design significantly reduces computational overhead and, through efficient task sharing, enhances the overall performance of the VLA model in multitask scenarios.\\n\\nWe define \\"chatting\\" as open-ended, multi-turn conversations, and VLA4CD enhances its parallel generation capabilities in decision-making and dialogue tasks through LoRA fine-tuning. Our model retains the inherent language generation and everyday chatting capabilities of large language models, although we did not demonstrate this in the paper. 
In the paper, we only showcased dialogues related to driving scenarios, as large language models inherently possess everyday chatting abilities.\\n\\n***Response to Weaknesses2:***\\n\\nIn both generating dialogues and action generation capabilities, we generate them simultaneously, as shown in Figure 5. The independent evaluation of text and action generation capabilities is solely to demonstrate the comparison between VLA4CD and models with only single dialogue or decision-making abilities. The model does not actually execute actions based on text outputs; as shown in Figure 5(a), we unify the action outputs for decision control to maintain consistency. Regarding text explanations, we use GPT-4o for scoring, as shown in Figure 4, where VLA4CD outperforms other solutions in this scenario, but cannot guarantee 100% correctness. Since GPT-4o has strong language understanding capabilities and avoids human evaluator subjectivity, we chose the third-party GPT-4o for scoring, which is also a mainstream evaluation method currently[1][2][3].\\n\\n[1] Fu, Jinlan, et al. \\\"Gptscore: Evaluate as you desire.\\\" arXiv preprint arXiv:2302.04166 (2023).\\n\\n[2] Geng, Xinyang, et al. \\\"Koala: A dialogue model for academic research.\\\" Blog post, April 1 (2023): 6.\\n\\n[3] Sun, Zhiqing, et al. \\\"Principle-driven self-alignment of language models from scratch with minimal human supervision.\\\" NeurIPS 2024.\\n\\n***Response to Weaknesses3:***\\n\\nWe use benchmarks based on the Gym-Carla environment, designed specifically for reinforcement learning agents in autonomous driving scenarios. Since the core of reinforcement learning is optimization through reward functions, we customized the evaluation matrix according to task requirements, as detailed in Appendix A.5, which defines multiple evaluation criteria. These criteria may differ from traditional benchmarks in the CARLA leaderboard, leading to differences in experimental results. 
We use these customized evaluation matrices because our dataset is collected based on reinforcement learning agents, whose behaviors and adaptability need to be evaluated through reinforcement learning-specific methods. To better assess the explicit feedback mechanism and continuous learning capabilities we propose, these customized matrices can more accurately reflect the actual effectiveness of the method.\"}", "{\"title\": \"Response Part (3/3)\", \"comment\": \"***Response to Q5: Why do the authors think using separate loss for language and action generation (unlike DriveGPT4) improves the decision making performance?***\\n\\nWe believe that using separate loss functions for language and action generation (unlike DriveGPT4) can improve decision performance mainly because this design avoids task conflicts and enables effective parallel processing of multiple tasks. In DriveGPT4, text generation and action generation share the same generation process, leading to conflicts between the two tasks, especially in complex environments where the model cannot simultaneously generate efficient text and action instructions. As a result, DriveGPT4 cannot guarantee fine-grained action instructions at each moment, affecting its decision accuracy and dialogue capabilities.\\n\\nIn contrast, VLA4CD sets up independent objective functions for text generation and action generation, ensuring that each task can be processed efficiently in parallel, avoiding task conflicts. This design allows VLA4CD to generate text and action instructions simultaneously in complex tasks, thereby enhancing the model's decision-making ability and efficiency in multitask environments. 
Through independent loss functions, VLA4CD ensures the effectiveness and completeness of each output when handling multi-objective tasks, improving the model's adaptability and decision performance.\\n\\n***Response to Q6: Why do the authors only focus on the self-driving task?***\\n\\nThe question you raised is very important and indeed worth in-depth discussion. Our current research primarily focuses on autonomous driving tasks because we built the multimodal fusion, alignment, and system loss functions from scratch, fine-tuning a large model based on Llama-7B, and completed dataset collection, model establishment, and full evaluation and inference design. This process consumed a significant amount of time and cost, so we decided to first implement and evaluate the model in the autonomous driving scenario.\\n\\nHowever, this does not mean that we are limited to the autonomous driving field. In the future, we fully intend to extend this approach to other areas such as robotic arms and achieve multiple objective task outputs (e.g., parallel processing of three or more tasks). Autonomous driving is merely the starting point for verifying and testing this multitask processing framework. As the technology matures and the model is optimized, we plan to promote it to broader application scenarios.\\n\\n***In summary***\\n\\nOverall, you have raised very valuable and insightful comments. We will revise the paper to clearly address the points you mentioned. Finally, we sincerely hope that these explanations alleviate your concerns and that you will reconsider your score.\"}", "{\"title\": \"Response Part (1/3)\", \"comment\": \"Thanks for your time and efforts in reviewing our paper! We highly appreciate your thoughtful and constructive suggestions and have carefully considered each comment. 
Our responses to your queries are outlined below:\\n\\n***Response to Weaknesses1: \\\"What is the motivation of our paper?\\\"***\\n\\nThe motivation behind the VLA4CD model is to simulate the human ability to handle multiple tasks in complex environments, particularly in the parallel processing of action decision-making and dialogue generation. With the rapid advancements in LLMs within the field of NLP, significant research has focused on fine-tuning pre-trained LLMs to perform various tasks in specific domains. Existing approaches often involve fine-tuning LLMs independently for each task using its respective training data. This approach not only incurs high computational costs but also isolates the knowledge of each task, preventing efficient sharing across tasks, which can lead to suboptimal overall performance.\\n\\nTo address these issues, we propose the VLA4CD integrated model. This model consolidates data from all tasks and performs unified fine-tuning on a single pre-trained LLM, while also incorporating specialized loss functions tailored for multimodal and multitask learning. This design significantly reduces computational overhead and, through efficient task sharing, enhances the overall performance of the VLA model in multitask scenarios.\\n\\n***Response to Weaknesses 2 and 3: Please refer to the answers below for Q1, Q2, and Q3.***\\n\\n\\n***Response to Q1: How does the model compare to DriveGPT4 and why does DriveGPT4 do so bad? DriveGPT4 is doing exactly same thing as this model aims to do (text generation + action generation).***\\n\\nCompared to DriveGPT4, VLA4CD demonstrates significant advantages, particularly in multitask parallel processing and output effectiveness.\\n\\nFirstly, DriveGPT4 is designed with a multimodal input but single-modal output framework, generating both text and action instructions through a detokenizer. 
However, this approach has inherent limitations, especially in generating fine-grained decision instructions at each moment. This is because text generation and action generation tasks in DriveGPT4 are not independent but share the same generation process. This leads to task conflicts, particularly in complex scenarios, where the model cannot simultaneously generate efficient text and actions. As shown in Figure 2 and Tables 1, 2, and 3, due to this task conflict, DriveGPT4 cannot consistently generate efficient and complete text and action instructions, severely affecting its decision accuracy and dialogue capabilities.\\n\\nIn contrast, VLA4CD employs a multimodal input and multimodal output architecture, with separate objective functions specifically for text generation and action generation. This ensures that the model can generate efficient text and action instructions simultaneously when handling complex tasks. By designing independent objective functions, VLA4CD can output text and actions in parallel, avoiding task conflicts. This parallel processing not only simplifies system design but also significantly enhances the model's decision-making ability and efficiency in multitask complex environments.\\n\\nMoreover, VLA4CD's parallel processing mode effectively improves the model's adaptability in multi-objective tasks, ensuring the effectiveness and completeness of text generation and action decision-making. In contrast, DriveGPT4, due to its shared generation mechanism, often struggles with task conflicts, resulting in the inability to generate high-quality text and precise action instructions simultaneously. This parallel design makes VLA4CD far superior to DriveGPT4 in multi-objective tasks.\"}" ] }
07ZaA3MiL0
Consistent Iterative Denoising for Robot Manipulation
[ "Ye Niu", "Sanping Zhou", "Yizhe Li", "Ye Deng", "Le Wang" ]
Robot manipulation in complex scenarios usually involves multiple successful actions, which requires generative models to estimate the distribution of various successful actions. In recent years, the diffusion model has been widely studied in many robot manipulation tasks. However, the diffusion model experiences inconsistent noise supervision across various action labels and denoising timesteps, which compromises accurate action prediction. To address this, we propose a Consistent Iterative Denoising Model (CIDM). On the one hand, CIDM designs new noise supervision to avoid interference between different successful actions, leading to consistent denoising directions. On the other hand, CIDM unifies all denoising timesteps, avoiding inconsistent predictions of the diffusion model over different timesteps. Moreover, we also design a novel radial loss to make the model focus on denoising results rather than iterative process routes. Our method achieves new state-of-the-art performance on RLBench, with the highest success rate of 82.3\% on a multi-view setup and 83.9\% on a single-view setup.
[ "robot manipulation", "consistent iterative denoising", "diffusion model", "imitation learning" ]
https://openreview.net/pdf?id=07ZaA3MiL0
https://openreview.net/forum?id=07ZaA3MiL0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kLIH5biGQi", "fLdTXhDHtp", "bXqNF5lhjJ", "Q5rgmNS2pA", "4D4k6iIpqx" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730655876767, 1730614147370, 1732619999260, 1730602683984, 1730570141916 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4510/Reviewer_fvRZ" ], [ "ICLR.cc/2025/Conference/Submission4510/Reviewer_Z53t" ], [ "ICLR.cc/2025/Conference/Submission4510/Authors" ], [ "ICLR.cc/2025/Conference/Submission4510/Reviewer_HNF4" ], [ "ICLR.cc/2025/Conference/Submission4510/Reviewer_RTKz" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel approach to denoising in diffusion models for robot manipulation tasks. The authors suggest replacing the standard noising/denoising process with a Langevin dynamics denoising field based on a signed distance function (SDF). This field serves as a deterministic gradient for denoising, aiming to improve temporal consistency and convergence to the ground truth action. Additionally, the authors introduce an alternative radial loss function to optimize the denoising network. The method is evaluated on RLBench in simulation.\\n\\nThe paper presents a potentially novel idea by introducing an SDF-based denoising field for diffusion in robot manipulation tasks. However, the clarity of the writing and the consistency of the mathematical formulations require improvement to ensure a thorough understanding of the proposed method. The limited number of trials and the ambiguities surrounding the evaluation metrics raise concerns about the robustness of the results. The authors should address the discrepancies observed in the convergence behavior and provide a more thorough explanation of the method's capabilities and limitations. 
Addressing the weaknesses and questions identified in this review would significantly strengthen the paper's contribution and impact.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The proposal of a deterministic denoising schedule using an SDF is an interesting alternative to traditional diffusion methods. This approach has the potential to enhance temporal consistency and guide the denoising process more directly towards the ground truth action. The ablation studies presented provide evidence supporting the effectiveness of individual components of the proposed method in specific scenarios.\", \"weaknesses\": \"1. The applicability of the proposed method appears limited to 2D robotics tasks with end-effector movements, such as tabletop manipulation. The authors do not demonstrate how this approach can be extended to other types of actuations, such as gripper control.\\n2. The paper seems to present a potential misunderstanding regarding the capabilities of diffusion models. It is suggested that diffusion models may produce the same noisy action for different successful actions. However, diffusion models are capable of learning multimodal action distributions through the denoising process, even in cases of overlapping Gaussians.\\n3. Figure 4 raises questions about the convergence behavior of both the proposed method and the standard diffusion model. In scenarios with multiple successful actions (represented by four red triangles), both methods appear to collapse to a single ground truth action. This behavior contradicts the expectation that these models should be able to learn a multimodal distribution and converge to all valid solutions.\\n4. The paper lacks clarity on why the proposed method (CIDM) converges to only one ground truth action in Figure 4, despite demonstrating the ability to learn a bimodal distribution in Figure 2. 
It remains unclear why the method does not capture the four-modal distribution evident in the task.\\n\\nThere are a number of places in the text where the authors could provide clarification.\\n\\n### Confusing text\\n\\n1. Line 83: The statement \\\"robot manipulation prefers to sample initial actions over the entire action space\\\" is unclear. It is possible the authors intend to convey that the training data covers the entire action space, but the phrasing is ambiguous and requires clarification.\\n2. Line 175: The phrase \\\"After eliminating the effects of specific successful action $\\\\hat{y}$\\\" is vague. It is unclear what is meant by \\\"eliminating the effects.\\\" Specifying the mathematical operation, such as marginalizing out $\\\\hat{y}$, would improve clarity.\\n3. Figure 3 caption, line 337: The caption could be improved significantly. It states that the list of 14 tasks is \\\"highly representative.\\\" Highly representative of what?\\n4. The paper lacks details on the experimental setup, particularly regarding the number of trials and seeds used for evaluation. The authors state that results are based on four trials per task, but it is unclear how many random seeds were used to ensure the reliability of the results.\\n5. The metric \\\"success probability\\\" requires further explanation. If it is calculated based on four trials per task, the possible values should be limited to [0, 25, 50, 75, 100]%. However, Table 2 presents values such as 82.7%, suggesting a different calculation method or a larger number of trials.\\n6. Equations 12 and 13 contain an error. The 2-norm $\\\\|y - \\\\hat{y}\\\\|$ cannot be less than a negative number ($c<0$).\\n7. Equations 12, 13, and 14 define the denoising field in a way that seems counterintuitive. The denoising field should be $\\\\epsilon_x(y) = \\\\hat{y} - y$ to ensure that a single denoising step, $y + \\\\epsilon_x(y)$, results in the ground truth action $\\\\hat{y}$. 
The gradient should point towards the ground truth, not away from it.\\n8. Line 285: The authors claim to be learning a denoising field independent of $\\\\hat{y}$. However, the training data includes $\\\\hat{y}$, suggesting that the model likely learns $\\\\hat{y}$ implicitly. This statement requires clarification or justification.\\n9. Table 1 caption: The caption states that underlined text indicates \\\"suboptimal performance for each column,\\\". Does this mean the second-best performance or some other criterion? Additionally, not every column has an underlined number.\\n\\nThe paper could benefit from additional explanations and clarifications to enhance the reader's understanding of the proposed method. \\n<!-- The authors could have utilized the extra to address some of the ambiguities and provide more detailed insights. -->\\n\\n## Minor Typos\\n\\n1. Line 15: \\\"CIDM\\\" is used before its introduction in line 144.\\n2. Line 77: \\\"noises supervision signals\\\" should be \\\"noise supervision signals.\\\"\\n3. Line 93: \\\"Additionally, We\\\" should be \\\"Additionally, we.\\\"\\n4. Equation 1, line 161: If referring to the DDPM scheduler, the term inside the square root should be $(1 - \\\\bar{\\\\alpha_t})$, not $(1 - \\\\bar{\\\\alpha_t^2})$.\\n5. 
Line 28: The statement \\\"Robot manipulation mainly involves two steps, acquiring effective scene representation and predicting correct actions\\\" oversimplifies the complexity of robot manipulation, which also involves elements of execution on hardware and reactive control.\", \"questions\": \"The authors are encouraged to address the weaknesses identified in this review and provide clarifications on the points raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel Consistent Iterative Denoising Model (CIDM) aimed at improving action prediction in robot manipulation tasks by addressing issues with diffusion models, specifically noise inconsistency and timestep variations. CIDM introduces two core innovations: (1) a consistent denoising field, which ensures clear denoising directions and temporal consistency across actions, and (2) a radial loss function that emphasizes actions with minimal noise to achieve more accurate iterative denoising.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel approach to robot manipulation using a diffusion model, addressing limitations of traditional methods by incorporating a consistent denoising field and a radial loss function.\", \"Empirical rigor is demonstrated through extensive experiments on the RLBench benchmark, showing clear performance gains over baseline methods. 
The ablation studies further validate the contribution of each CIDM component, enhancing confidence in the results.\", \"By addressing practical challenges in action prediction for complex robot tasks, CIDM enhances the applicability of diffusion models.\"], \"weaknesses\": [\"While the paper presents a novel application of iterative denoising to robot manipulation, it lacks a theoretical analysis(Like some other articles on diffusion dynamics$^{[1]}$). Highlighting unique theoretical insights or algorithmic innovations would better justify CIDM\\u2019s position in the field.\", \"The introduction of a radial loss function, while conceptually sound, lacks comprehensive theoretical grounding or references to similar existing loss functions used in other domains. This gap makes it challenging to assess the robustness and scalability of the loss. Providing a more detailed theoretical analysis or justifying it with additional related work on spatial consistency in generative models could clarify its effectiveness.\", \"The current evaluation focuses on RLBench, but it would significantly benefit from testing in other robotic benchmarks or real-world scenarios to assess generalization capabilities. Evaluating CIDM's performance across tasks with varying levels of action complexity, such as multi-step manipulation in dynamic environments, would enhance the robustness claims.\", \"Temporal consistency is claimed to improve denoising stability across timesteps, but the scalability of this approach remains uncertain for long-duration tasks. Additional evaluations on tasks requiring extended sequences of actions (beyond 100 timesteps) could illustrate CIDM\\u2019s scalability and stability in prolonged scenarios.\", \"[1] Liu, X., Gong, C., & Liu, Q. (2022). Flow straight and fast: Learning to generate and transfer data with rectified flow. 
arXiv preprint arXiv:2209.03003.\"], \"questions\": [\"Can the author provide a more detailed explanation of Figure 1? I'm not sure I understood (b) correctly, especially the blue circles in it.\", \"Equations 12 and 13 seem to have some typos; I think it should be $\\\\exists c>0$.\", \"The value of the denoising field (Eq. 14) is based on the value of the 2-norm between the noisy action and the successful action. The implicit assumption here is that the 2-norm in the action space is well defined. This assumption is not obvious as the common action space may contain position, angle, linear velocity, angular velocity, torque... The 2-norm between two actions doesn't necessarily make sense.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper aims to solve the inconsistent noise supervision issue in diffusion models. The inconsistency comes from two sources. One source is the multi-modal action labels. The other is time-varying noise in denoising steps. They propose a novel consistent iterative denoising model and a new radial loss to address this issue. The proposed method is tested on RL Bench against other baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper clearly illustrates the problem, their motivation to propose the new components to diffusion models, and the contributions.\\n2. The paper provides theoretical analysis to formalize the problem. \\n3. The paper shows good results on RL Bench and does ablation studies over the different proposed components to show the importance of each part.\", \"weaknesses\": \"1. In related work, the authors list a lot of recent related work in diffusion models. However, some related work is not summarized very clearly. 
For example, \\u201cInversion by Direct Iteration (Delbracio & Milanfar, 2023) pursues a simpler form to get rid of the limitations of traditional diffusion.\\u201d This sentence is confusing because it is not clear to me what things the paper tries to simplify and what limitations they are getting rid of. Another issue is that the paper mentions that recent work on diffusion models tries to speed up the denoising process and provide in-depth analysis of diffusion models. However, these are not directly related to the inconsistency problem this paper tries to solve. Therefore, I think the paper should reorganize this section so that the connection and difference between related work and the proposed method is more clear.\\n2. The main advantage of the proposed method as mentioned by the paper is consistent supervision from multiple successful actions (i.e., multi-modality). However, the RL Bench demonstrations are not a dataset with obvious multi-modality. A recent paper has proposed a dataset benchmark[1] for evaluation of multi-modal behaviors. It would be interesting to see how the proposed methods and the baselines behave in this benchmark.\\n3. The proposed method\\u2019s improvement over previous methods on Multi-view is not very significant with 82.3% average success rate compared to RVT2\\u2019s 81.4%. For each task, the proposed method has the highest success rate only in 7 out of 16 tasks. Therefore, it seems that the performance improvement is limited. \\n4. In the results section, the paper only includes the mean, but it is reasonable to also include the std for the success rate as this is usually reported in other papers.\\n\\n[1] Xiaogang Jia, Denis Blessing, Xinkai Jiang, Moritz Reuss, Atalay Donat, Rudolf Lioutikov, and Gerhard Neumann. Towards diverse behaviors: A benchmark for imitation learning with human demonstrations. In The Twelfth International Conference on Learning Representations, 2024.
The paper mentions they use a CLIP encoder to extract the embedding from text instructions and image observations. However, it doesn\\u2019t mention how they process the robot state information in their framework. Moreover, if there are multiple views, how do they fuse the embedding from different views? The paper needs to add some clarification for those details.\\n2. For qualitative results, the authors only show the stack blocks task. It would be interesting to see more qualitative rollouts of other tasks. The paper mentions the method is good for tasks that have multiple successful actions. However, the failure case it shows when compared to 3D Diffuser Actor in Appendix A.4 does not involve multi-modal actions. To solidify the paper\\u2019s claim, it would be better to include some examples of multi-modal actions and visualize the denoising process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors argue that the denoising targets of conventional diffusion models are inconsistent, making them unsuitable for robotic manipulation tasks, as they 1) vanish near local optima and 2) are time-varying. To address this, the authors propose the Consistent Iterative Denoising Model (CIDM), which learns from a time-invariant denoising field combined with a radial loss function. In this proposed denoising field and radial loss function, distant GT actions have less influence than closer ones. The authors compare CIDM's performance against state-of-the-art text-conditioned visual robotic manipulation methods, such as 3D Diffuser Actor and RVT2, in the RLBench settings used in PerAct and GNFactor.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Diffusion models for robotic manipulation indeed behave very differently from denoising in the pixel space of conventional diffusion models for image generation. 
Unlike pixel-space diffusion, where values are confined within a compact [0,1] range, gripper pose space is unbounded. This often causes diffusion models to exhibit underconvergent behavior as illustrated in Figure 4. The proposed method appears to offer some mitigation for this important issue.\", \"weaknesses\": \"### **Weakness 1. Lacking Probabilistic Justification**\\n\\nThe authors argue that the score function of conventional diffusion models being zero at local minima is **biased** and should instead always point toward the nearest target. They state:\\n>\\n> \\\"The first problem is that the score function $\\\\nabla_{y_t} \\\\log p_{t} (y_t)$ is biased as a denoising field... Since the reasonable denoising field always makes noisy action closer to its target successful\naction...\\\"\\n> \\nHowever, this claim is debatable. I would contend that CIDM itself introduces bias while conventional diffusion models are unbiased. For a model to be unbiased, the denoising field should be almost zero near saddle points, as there\\u2019s no justification for favoring one specific target. In conventional diffusion models, added noise serves to break such ties. Conversely, CIDM imposes a strong preference toward nearby targets among multiple possible answers, making the output highly sensitive to the initial conditions of the denoising process. This arbitrary choice of denoising field introduces bias in CIDM, unless the distribution of initial points is meticulously selected (as in flow-matching models). Alternatively, one could adopt the Annealed Langevin MCMC viewpoint proposed by Song & Ermon (2019). In this case, however, one should carefully choose the form of noise and denoising target so as to guarantee that the learned model is unbiased. These considerations are not thoroughly addressed in the paper. Consequently, there's no assurance that the samples $y$, generated by CIDM, follow the actual target policy $y\\\\sim p_{data}(y|x)$. \\n\\n### **Weakness 2. 
Claimed Benefit not Well-supported**\\nAs discussed in Weakness 1, CIDM introduces bias. However, as demonstrated by the Cold Diffusion paper, neural networks can still produce reasonable samples across various corruption processes, even if biased. Thus, bias isn\\u2019t necessarily detrimental when a meaningful trade-off is achieved. However, for CIDM, the specific benefits of this trade-off remain unclear.\\n\\nFirstly, it is questionable whether the issue presented in Figure 4 is due to inconsistent training objectives. Rather, it could be due to the inference-time denoising scheduler. For instance, I observe that increasing the number of denoising iterations or lowering the temperature at smaller noise scales often resolves the underconvergence issue shown in Figure 4. Better denoising strategies, such as DDIM, could also be an option.\\n\\nSecondly, the authors argue that the conventional denoising target is difficult to learn, and suggest that CIDM alleviates this issue by using a more consistent target. However, I\\u2019m not convinced that inconsistency is the only factor at play here. The primary issue could instead be the precision of the action. Diffusion models often struggle with generating highly precise actions due to their inherently noisy and complicated denoising pipeline. In contrast, models specifically optimized for precision, like RVT2, outperform CIDM and 3D Diffuser Actor in precision tasks such as block stacking, as suggested by the experimental results. If the authors argue that inconsistent denoising targets hinder learning, they should provide evidence that biasing the target with a more consistent approach indeed reduces learning variance, i.e., by showing that CIDM demonstrates improved data efficiency or lower performance variance across different seeds.\\n\\n### **Weakness 3. Insignificant Result**\\nThe experimental results are not significant, as only 25 test episodes were conducted per task. 
For the 18 tasks in the PerAct setting, this amounts to 450 trials. With CIDM achieving an 82.3% success rate, the 90% confidence interval is 0.78991 \\u2264 p \\u2264 0.85134. Thus, a 1% improvement over state-of-the-art methods like RVT2 and 3D Diffuser Actor does not offer substantial evidence of CIDM\\u2019s superiority.\\n\\nEven if the reported performance gain holds, it does not sufficiently justify the bias introduced by CIDM. For example, if an expert policy selects a red block with 90% probability and a yellow block with 10%, we would expect the learned policy to favor red blocks proportionally. This expectation does not hold for CIDM. Every generative model has a precision-diversity trade-off, and the RLBench success rate primarily measures precision over diversity. Therefore, sacrificing sample diversity for only a 1% performance gain does not make a lot of sense to me.\", \"questions\": \"Which architecture did you use for the denoising network? For a fair comparison, it would be helpful to know how the architectures and the number of parameters are controlled across models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
07N9jCfIE4
The Complexity Dynamics of Grokking
[ "Branton DeMoss", "Silvia Sapora", "Jakob Nicolaus Foerster", "Nick Hawes", "Ingmar Posner" ]
We investigate the phenomenon of generalization through the lens of compression. In particular, we study the complexity dynamics of neural networks to explain \emph{grokking}, where networks suddenly transition from memorizing to generalizing solutions long after over-fitting the training data. To this end we introduce a new measure of intrinsic complexity for neural networks based on the theory of Kolmogorov complexity. Tracking this metric throughout network training, we find a consistent pattern in training dynamics, consisting of a rise and fall in complexity. We demonstrate that this corresponds to memorization followed by generalization. Based on insights from rate--distortion theory and the minimum description length principle, we lay out a principled approach to lossy compression of neural networks, and connect our complexity measure to explicit generalization bounds. Based on a careful analysis of information capacity in neural networks, we propose a new regularization method which encourages networks towards low-rank representations by penalizing their spectral entropy, and find that our regularizer outperforms baselines in total compression of the dataset.
[ "Compression", "Complexity", "Generalization", "Grokking", "Minimum Description Length" ]
Reject
https://openreview.net/pdf?id=07N9jCfIE4
https://openreview.net/forum?id=07N9jCfIE4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yN8yMoMpru", "qomO79zPWh", "qPz00tqoWI", "kM3mqID3YS", "hutNOhWRj8", "hk3IMOK09f", "f0da9E0NhM", "dBT3MvsdIG", "c7AEis6UGu", "aUv4xmVQL8", "aLFL9woHwC", "WfHTa4KUYM", "VRP6VogDPn", "SydpkBrEIu", "MXJkQaxuiE", "LLJCOV0mfz", "L5Op0KU2Oq", "GVMGPNhkzr", "GUdtwArD8A", "C0lih08R76", "BTsGSq4YT3", "AhclDjKStb", "8E8d5Rr5ch", "3gTZLiUPG3", "2zzR59oB6G", "0Iv3jIDzuv" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730328643149, 1732479187170, 1731696827357, 1731698788602, 1732448211700, 1732539555929, 1730647411645, 1737523903201, 1731697332484, 1732021480230, 1731697670598, 1732528044374, 1732066003850, 1732535983136, 1734644012614, 1731696439294, 1731698076707, 1732064977462, 1729521956188, 1730391449573, 1732021034023, 1729959056312, 1731698865217, 1731697394290, 1731696586026, 1731697763989 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_7uBz" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_7uBz" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_n5VB" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_pi66" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_JG45" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8354/Reviewer_JG45" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Area_Chair_XkvX" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_JG45" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_n5VB" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_JG45" ], [ "ICLR.cc/2025/Conference/Submission8354/Reviewer_PJNM" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ], [ "ICLR.cc/2025/Conference/Submission8354/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduce a measure of neural networks\\u2019 complexity, and show that grokking could be explained by the rise and fall of the model\\u2019s complexity. The authors also propose methods for compressing neural networks via quantization and spectral entropy-based regularization, and empirically demonstrate their performances with modular arithmetic tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally clear, and easy to read and interpret.\", \"The paper provides nice intuitions on building generalizable neural networks, especially from the model complexity perspective.\", \"The paper considers an interesting set of techniques for model compression with minimal performance loss, and tests them with experiments.\"], \"weaknesses\": [\"While the paper considers several promising ideas for model compression, there are a few limitations:\", \"While the complexity explanation of grokking is interesting, it seems to overlap with the circuit efficiency explanation proposed by Varma et al. (2023). 
Although the authors acknowledge that model complexity is not exactly identical to efficiency or parameter norms, the added insights in this area feel somewhat limited.\", \"The proposed model compression methods are quite similar to existing techniques on quantization and low-rank approximations, which raises questions about the novelty of the approach. Spectral entropy-based regularization is an interesting idea, but concerns about potential computational overhead and their applicability in more complex settings remain.\", \"Lastly, the applicability of entropy regularization techniques in more complex problems beyond the modular arithmetic task raises some concerns. Additional evidence or analysis demonstrating how this technique can advance the complexity-performance Pareto frontier in more difficult tasks will strengthen the paper.\"], \"questions\": \"1. How did you set the learning rates for experiments? Does the performance of entropy regularization vary with different learning rates?\\n2. While entropy regularization surely helps in compressing the model, I expect that both the usual L2 regularization and the entropy regularization will achieve perfect test accuracy. Could you think of a scenario where the proposed regularization technique offers a clear performance advantage over L2 regularization?\\n3. Will entropy regularization also help in training larger models with more complicated datasets, where they often do not have simple representations as one-dimensional numbers?\\n4. Could the computational overhead of low-rank optimization become significant, especially when applied to large models? If so, how could we mitigate them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for clarifying the paper! 
I completely agree that understanding generalization and its relationship to complexity is a crucial question in machine learning. The finding that the memorization-to-generalization transition aligns with a drop in complexity is encouraging, as it verifies our understanding of overfitting within the community.\\n\\nThat said, I feel this explanation might not fully capture the essence of \\\"grokking.\\\" The rise and fall of complexity seem to almost directly follow from how memorization and generalization are defined. This leaves some critical questions still open, such as:\\n(a) When does grokking occur?\\n(b) How do hyperparameters like learning rate and dataset size influence the memorization-generalization transition?\\n(c) What are the competing mechanisms that drive a network toward either memorization or generalization?\\n\\nI would love to hear your thoughts on these aspects as well.\"}", "{\"comment\": \"Thank you for your review.\\n\\n**Chosen Datasets**: We intend to study more complex datasets in our next work, but we have kept this paper focused on the original grokking tasks (without making any changes), to maintain clarity and comparability with the key prior work. \\n\\n\\n**Discussion of What Constitutes an Explanation**: It\\u2019s an interesting philosophical question whether this constitutes an \\u201cexplanation\\u201d of grokking, as you mention! What might one mean by an explanation? If we use different networks, with different activation functions, different topologies, different regularizers, the learned representations will, in all likelihood, be different. That is, the microscopic dynamics/representation of the network weights might not be the appropriate level of description, and so might not constitute an \\u201cexplanation\\u201d. The point here is that some abstraction emerges in the network which lets it generalize. 
To use an analogy: the dynamics of both water and air are governed by the hydrodynamic equations (they follow the same emergent macroscopic phenomenon), but their microscopic components are completely different (H20 vs a mix of gases). What constitutes an \\u201cexplanation\\u201d of their behavior? Does one want to appeal to the microscopic dynamics (quantum mechanics), or is it enough to show that the same abstraction emerges? Our argument in this work is that we can actually ignore the microscopic weight structures, and *explicitly* bound the generalization performance (which is what one generally cares about in ML) using the complexity, which is properly thought of as a macroscopic phenomenon in the context of coarse-graining. Ultimately, it will be up to the community to decide what constitutes an explanation for grokking.\\n\\n**RE: Bzip, and compression time**: Yes, it\\u2019s true these are not ideal. One could imagine some specialized compression scheme for weight matrices performing better here. However, the point of this work was not to produce the tightest possible complexity estimates at all costs, but to develop the conceptual framework and give a simple implementation. For our purposes, a simple off-the-shelf compressor like bzip2 is sufficient. We also experimented with gzip for the final compression step, which produced similar results with slightly worse compression ratios. When understanding the exact generalization performance is necessary for critical systems, one would want to use the best possible compression scheme within one\\u2019s compute budget.\\n\\n**Followup Work**: Now that we have this complexity measure, we are producing follow-up work applying these insights to other domains and scaling our method up. 
We\\u2019re excited to share these results with the community, but this first paper laying out the conceptual framework and demonstrating it on the grokking tasks is, in our view, the appropriate first step.\\n\\n**Additions to updated draft**: We have added a number of plots to the updated version of the paper. In particular we would like to point out the plots of the rate\\u2013distortion curves for the different regularization methods. You can see that our method Pareto-dominates weight decay at every distortion level, indicating the strong performance of our regularizer. Furthermore, we\\u2019ve added complexity plots for the unregularized network, so that you can see the complexity dynamics in the case where generalization does not occur. Finally, we also added plots showing the effective rank of the networks with different regularizers, which demonstrates how our spectral entropy regularizer enforces an effective low-rank penalty, helping us outperform weight decay in complexity.\"}", "{\"title\": \"Response 1.2\", \"comment\": \"> For instance, using the actual test accuracy would be very informative, to see whether the proposed regularization leads to better performance.\\n\\nAs we mentioned above, and in the paper, all regularized nets achieve perfect test accuracy, so there is no value in comparing the test accuracy.\\n\\n__\\n\\n> Does the generalization bound of Equation (4) only hold for finite hypothesis spaces? If yes is that a realistic assumption in practical learning settings? Moreover, could you be more precise as to why the choice of Solomonoff prior should lead to tighter bounds than other priors, such as the uniform prior over H?\\n\\nThe generalization bound of Equation 4 comes from Lotfi et al [4]. The Solomonoff prior is defined over all finite strings considered as programs. 
This is the most generic possible assumption that one could make for a computer model of some data.\\n\\nThe question of why the Solomonoff prior is a good one is very deep and interesting. There is ongoing research into the apparent simplicity bias found in nature [1, 2]. It appears to be the case that nature simply has a bias towards simpler structures, hence the Solomonoff prior is superior to a uniform prior. The ultimate nature of why this is the case is not yet clear.\\n\\n__\\n\\n> Line 181: Why can the empirical risk be understood as the entropy of the data under the model? Is there a way to formalize this fact?\\n\\nLotfi et al discuss this in their work which produces the bound we use. We refer you to their work to understand the nuances of the finite hypothesis bounds. In particular, they show how to adapt entropy measures for the risk (they have to make some small changes to ensure the entropy stays bounded).\\n\\nIn practice, one can take the risk to be the cross-entropy loss used in training. This is the intriguing link between compression and generalization: both the MDL principle and generalization bounds like Equation 4 suggest that we should take the model which minimizes the sum of data entropy and model complexity. The model which compresses the data best is the one which generalizes best. This deep fact underpins our work.\\n\\n__\\n\\n> Is it possible to obtain a formal statement relating the information capacity (Equation (9)) to generalization?\\n\\nThe information capacity can be seen as the largest upper-bound on the model complexity. That is, the model complexity can be no greater than its information capacity. For example, a model might have 100 parameters of 10 bytes each (total 10 bytes times 100 = 1KB). However, imagine all the parameters are zero (or 1, or any constant). That model is very simple; its complexity is low. We can also imagine that each of the 100 parameters is as complex as possible (e.g. 
uniform random), and that there is no discernable pattern in the parameters considered as a whole: the complexity of this model is large, but it is certainly no larger than the total capacity of the model (1KB). So yes, the capacity bounds the complexity, but it is the loosest possible bound. However, one can see that an effective model compression scheme might be to try distilling a larger model into a smaller one: then if the capacity of the smaller model is low enough, we may be able to produce quite tight complexity bounds.\\n\\n__\\n\\n> To what size and precision do the parameters \\u03bb and \\u03b4 (Section 4) refer to in practice?\\n\\nIt depends on model representation specifics. E.g. models trained in float32 vs float16 will have different effective max ranges and precisions, and so on. We control the max size \\\\lambda through weight decay, and the precision \\\\delta through the noisy weight scheme which we discuss in section 4.1\\n\\n__\\n\\n> How would the training accuracy be affected by the addition of Gaussian noise in practical deep learning settings?\\n\\nThere is no general answer to this question, as it can depend on the specifics of the data, model, loss function, and so on. However, adding noise to the weights is an effective regularization scheme. We cited [5] in our discussion of this scheme, and recommend it for further insight on using noisy weights as a regularization method.\\n\\n__\\n\\n> Can you define more precisely the notations used in Algorithm 2, such as BO.SUGGESTPARAMETERS()? More generally, can you provide more details on the Bayesian optimization procedure?\\n\\nBayesian optimization is a generic black-box optimization procedure. Say you have input-output access to some function f(x), and you want to find its maximum value. Bayesian optimization is a generic method which receives a history of (x, f(x)) pairs, and suggests a new x each time by refining a model of the function under evaluation. 
In our case, we are trying to minimize the compressed size of the network, so we maximize -compressed_size. Our inputs parameters are the quantization level \\\\Delta and the parameter \\\\tau introduced in section 4.2 which controls the degree of rank-decomposition. We used a standard python BO, found [here](https://github.com/bayesian-optimization/BayesianOptimization).\"}", "{\"comment\": \"Thanks for the comments, in particular in terms of what you call an explanation. I remain enthusiastic about this paper.\"}", "{\"comment\": \"**RE: The permutation question:** When we invoke the Kolmogorov complexity, we are implicitly choosing a reference Universal Turing Machine (UTM), i.e. a computing environment/programming language. This in some sense is equivalent to \\\"choosing a prior\\\" for doing Bayesian probability. So if you replace $h$ by $\\\\sigma(h)$, you **must** keep track of how the permutation changes the complexity.\\n\\n**RE: The normalization of the Solomonoff Prior**: Does our answer above also help to clarify this point? As we mentioned, when considering finite strings like we do in this work, one should be clear about the reference UTM, as it affects the effective \\\"prior\\\".\\n\\n**RE: Precision**: The precision is, as we mentioned, the \\\"effective quantization\\\" level. If we want to be completely mathematical about it: we are defining our parameters over $\\\\mathbb{R}$, and a precision map for fixed-point arithmetic with $n$ bits after the decimal point defines an equivalence class from $\\\\mathbb{R} \\\\rightarrow \\\\mathbb{Z}/(2^n)\\\\mathbb{Z}$. I.e. the precision is a quotient set where numbers differing by less than $2^{-n}$ are considered equivalent. 
We consider this formalism to be cumbersome for the clarity of our paper, and prefer to leave the wording as \\\"precision\\\".\"}", "{\"summary\": \"The authors introduce a new complexity measure for neural networks and claim that this complexity measure can be used to explain 'grokking'. \\\"Grokking\\\" in machine learning is this idea that neural networks suddenly transition from memorization to generalization long after overfitting the training data. They show that their complexity measure correlates with this 'grokking' and then show how this complexity measure can be used to define a new regularization method which encourages low-rank representations. This regularizer is defined using the spectral entropy of the network weights.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Understanding the role of model complexity and how it should be measured is an important question in machine learning. This paper takes a good step in this direction and presents a compelling case for a complexity measure which is defined using the minimum description length and ideas from compression and information theory. The paper contributes to a deeper understanding of this 'grokking' phenomenon, which has gotten significant attention in recent years.\\n\\nThe paper has good theoretical motivation and makes an interesting connection with the concept of grokking in machine learning. Their intrinsic complexity measure and regularization technique are well-grounded in theoretical concepts from information theory. 
The authors provide clear explanations and justifications for their design choices.\\n\\nThe paper is logically structured and well-written and supports their theoretical claims with experiments on synthetic tasks, like modular arithmetic, for decoder transformer models.\", \"weaknesses\": \"The complexity measure defined and explored in this paper is positioned as a way to 'explain grokking'.\\n\\nComparison with other complexity measures. The empirical results in the paper are nice. But it would be good to have a fair comparison of how other complexity measures look when measured in the same scenarios. It's unfair to say that this new complexity measure \\\"explains\\\" grokking without uncovering a scenario where this complexity measure is able to capture this behavior where others are not. Otherwise, it's unclear if this is just a correlational relationship with the perceived behavior of 'grokking'. \\n\\nLacking discussion of the cost for computing this complexity measure. If I understand correctly, the proposed complexity measure involves a Bayesian optimization procedure for finding the optimal compression parameters, which could be computationally expensive. It would be nice to address or (ideally) investigate how difficult this measure is. This would enhance the practicality of the approach.\\n\\nFrom what I understand, this complexity measure is somewhat dependent on the hyperparameters, in particular the per-layer truncation threshold $\\\\kappa(\\\\tau)$. It would be nice to have a detailed analysis, even experimentally, of the sensitivity to this threshold.\\n\\nThis paper has some very nice ideas and is worth exploring, but it would be good to have a section on Limitations of their approach with an honest assessment in terms of other complexity measures and the degree to which the results are not just correlational with this 'grokking' behavior. 
\\n\\nThe paper is carefully written and has a nice re-cap of the relevant ideas from information theory and compression in ML. However, the main message of the paper was at times hard to find. For example, what is the exact definition of this new complexity? I understand it relies on coarse-graining of the network and compression using bzip2, and I think the size of the compressed network is the proxy for the complexity. Is that the definition? This paper would benefit from clearer exposition in this respect.\", \"questions\": [\"What is the exact definition of the novel complexity measure introduced in this paper? And for which models is this measure well-defined? The related conversation about compression and motivation from information theory and Kolmogorov complexity is very nice, but it's unclear to me exactly how this measure is defined. Is this the content of Algorithm 2? Does the output of Algorithm 2 define the complexity measure?\", \"in line 400, can you clarify which subset of grokking experiments you used. And why you used this subset.\", \"in line 358 you state \\\"..we show that regularizing the spectral entropy leads to grokking..\\\" Is this an overstatement? How exactly is grokking defined quantitatively?\", \"In Figure 3, you compare your regularization technique with weight decay. What is the dependence of the proposed spectral entropy regularization on the regularization weight? What behavior do you notice as you apply more or less spectral regularization? It would be nice to see the effect as the regularization of the spectral entropy gradually increases.\", \"Does Figure 4 include multiple seeds? Why are error bars not visible in this plot?\", \"typos/nits\", \"in Figure 2. 
Why include the \\\"ours\\\" distinction when all plots are \\\"ours\\\".\", \"line 372, \\\"ideas\\\" to \\\"ideal\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response 1.1\", \"comment\": \"> While the complexity explanation of grokking is interesting, it seems to overlap with the circuit efficiency explanation proposed by Varma et al. (2023). Although the authors acknowledge that model complexity is not exactly identical to efficiency or parameter norms, the added insights in this area feel somewhat limited.\\n\\n**Formal Complexity Measure**: We strongly disagree that our added insights are limited. We were indeed inspired by Varma et al to study network complexity. The issue is that the L2 norm, which they appeal to to explain grokking, is not a complexity measure. This is a widespread misunderstanding in the ML community, which has led to much confusion. We aim to clarify that confusion with this work. Aaronson et al connect effective complexity with coarse-graining, but give no formal justification for this connection. We have formalized Aaronson et al\\u2019s insight using algorithmic rate\\u2013distortion theory, and then demonstrated the effectiveness of our proper universal complexity measure by applying it to explain grokking in terms of the fundamental information content inside a network. We believe our work has far-reaching implications for understanding generalization, model compression, and quantization schemes. Understanding the nature of generalization and its relationship to complexity is a question of **central importance** in machine learning.\\n\\n__\\n\\n> The proposed model compression methods are quite similar to existing techniques on quantization and low-rank approximations, which raises questions about the novelty of the approach. 
Spectral entropy-based regularization is an interesting idea, but concerns about potential computational overhead and their applicability in more complex settings remain.\\n\\n**Relationship to prior work**: It is true that others have studied quantization and low-rank approximation in machine learning. These are topics of fundamental importance, and we make no claim whatsoever about our use of these elements being novel per se. Our contribution is to connect these fundamental ideas with another: complexity. Why can models be quantized to different degrees? Why can they sometimes be distilled into smaller networks? Our work takes a key step toward clarifying these questions by illuminating the relationship of these basic ideas with complexity and generalization. **Can you be more specific about what concerns you have about the spectral regularization method being applied in \\u201cmore complex settings\\u201d?** \\n\\n**Followup work**: Now that we have this complexity measure, we are producing follow-up work applying these insights to other domains and scaling our method up. We\\u2019re excited to share these results with the community, but this first paper laying out the conceptual framework and demonstrating it on the grokking tasks is, in our view, the appropriate first step.\\n\\n__\\n\\n> Lastly, the applicability of entropy regularization techniques in more complex problems beyond the modular arithmetic task raises some concerns. Additional evidence or analysis demonstrating how this technique can advance the complexity-performance Pareto frontier in more difficult tasks will strengthen the paper.\\n\\n**Pareto frontier**: Because we are introducing a lot of conceptually new pieces in this work, we want to stay completely focused on the clearest possible example of complexity dynamics and generalization: grokking. However, we absolutely agree regarding the question of a Pareto frontier. 
We have updated the paper to include a plot (Fig 3) showing how the complexity of the models differs at different distortion levels, and find that our method represents a Pareto improvement over weight decay.\n\n__\n\n> How did you set the learning rates for experiments? Does the performance of entropy regularization vary with different learning rates?\n\nAs mentioned in the experiments section of the paper, we use exactly the same settings as the original grokking work, to avoid any additional complication or changes. Because of this, we only used the default rate of 1e-3. We have added a hyperparameter table to the appendix.\"}", "{\"title\": \"response to 1.2 and 1.3\", \"comment\": \"Thank you for acknowledging that the reasons why the Solomonoff prior could be a good choice are not yet clear. I think it would be very beneficial to include such a discussion in the paper (maybe in the appendix?) as it is a basic building block of your theory.\n\nRegarding my question on the permutation of the hypothesis, my point is that there seem to be a lot of different ways to define \\\"Kolmogorov-inspired\\\" complexity measures that could be used in the generalization bound (for instance the composition of Kolmogorov complexity and any permutation seems to work). Therefore, why is Kolmogorov complexity the best choice? More generally, this raises the question of whether Kolmogorov complexity is the best theoretical explanation of your experimental (compression-based) results.\"}", "{\"title\": \"Response 1.1\", \"comment\": \"Thanks for your review.\n\n**Key Motivation**: The motivation is to understand the nature of complexity and generalization better. Why and when do neural networks generalize? How can we know if they\u2019ve learned a good explanation? What kinds of abstractions emerge in the networks? 
Do they learn complicated explanations, or simple ones?\nThe grokking phenomenon is the clearest example we know of where during training the networks undergo a clear phase transition between memorization and generalization, so it is a perfect test-case to understand the relationship between generalization and complexity. Our results demonstrate that networks are highly complex when they have simply memorized their training data, and that when they generalize, they become much simpler. Our complexity dynamics plots demonstrate this transition from high to low complexity clearly.\n\n**Comparison with previous work**: As discussed in the related work section, Liu et al produce a complexity proxy, but their measure cannot be used to construct a generalization bound. They do not prove that their measure guarantees generalization behavior. The same is true of Humayun et al. In contrast, our method bounds the Kolmogorov complexity, which results in an explicit generalization bound in turn. Furthermore, our construction is universal, and can be applied to any kind of parameterized model.\n\nRe: Del\u00e9tang et al, this work shares our view of sequence modeling as compression, but does not study grokking. Their work is focused on dataset compression with LLMs, but not on the model complexity. Our work demonstrates that model complexity must be considered jointly with data compression to understand generalization.\n\n__\n\n> Could you elaborate on the comparison with bzip2? What is being compressed, problem setup, compressed file size, etc.?\n\n**Relationship to zip**: It turns out that all else being equal, more compressible models provably generalize better, so there is a deep connection between how compressible a model is and how well it generalizes. Ultimately, one can bound the complexity of the network by its information content (e.g. its filesize). Hence, we want to know: how much can we compress this model? 
If we simply try to zip the weights, we don\u2019t achieve any meaningful compression because of the random information in the network, so we don\u2019t get insight into the model complexity. In this work, we presented a way to get rid of the noise in the weights, and we made this procedure formal using rate\u2013distortion theory, which is the same theory that underlies related compression schemes, such as JPEG. So in our work, we give a formal theory of network compression, and connect that theory to equations which tell us how well a network will perform on unseen examples (its generalization performance). Our compression scheme has multiple steps to get rid of noise (quantization, low rank approximation), and the final step is to zip the de-noised weights, to get a complexity measure in bytes. A proper complexity measure like ours allows us to give an explicit generalization bound which is universal.\n\n__\n\n> The papers claim a 30-40x improvement in compression ratio, but I did not find any details or data.\n\nWe have added the naive bzip2 filesizes of the networks, shown in Figure 9 in the appendix. Note the y-axis scale in comparison to Figs 1 and 2. The final complexity of the regularized networks as measured by our method is 30-40x smaller than with naive bzip2.\"}", "{\"title\": \"Thank you for your answer\", \"comment\": \"Thank you for clarifying the discussion around information capacity, I think it can help a lot. In particular, I would like the term \\\"precision\\\" to be defined formally.\n\nI also thank the authors for clarifying some points related to Kolmogorov complexity and the Solomonoff prior.\n\nI still have a few questions regarding this aspect:\n\n1. You mention that Kolmogorov complexity is universal up to a constant, but I guess the prior is then normalised for the prior to be a probability distribution, so the constant does not really matter right? (which can be a good thing actually)\n\n2. It is clearly not central to the paper but I think I misexplained my permutation argument: I didn't mean to consider the complexity associated with a permutation, but rather that if $\\sigma: \\mathcal{H} \\longrightarrow \\mathcal{H}$ is any permutation of the hypothesis set $\\mathcal{H}$ (i.e., the prior is a distribution on $\\mathcal{H}$), then $K(h)$ can be replaced by $K(\\sigma(h))$ in Equation (4) (if I am correct).\n\nI would like to thank the authors for taking the time to clarify several of my concerns. While I believe that some work remains to be done (especially regarding clarity and formal definitions of the various introduced quantities), I will increase my score to 6 (marginally above the threshold).\"}", "{\"comment\": \"> Thank you for acknowledging that the reasons why the Solomonoff prior could be a good choice are not yet clear.\n\nIn our previous comment on the ongoing exploration of Solomonoff priors in nature, we merely meant to point out that connecting the prior with natural phenomena is ongoing. The theoretical reasons to expect the Solomonoff prior to hold are clear, and well-established: The Solomonoff prior is the appropriate universal prior that encodes a simplicity bias. \n\nIf one expects that simpler objects are more likely to occur than complex objects (where complexity is the minimum description length on a Turing machine), the Solomonoff prior exactly captures this notion, and is well-studied in Algorithmic Information Theory. In fact, the Solomonoff prior was used by Hutter [1] to construct a theoretically perfect (but uncomputable) AI agent.\n\nWe will include an expanded discussion of the Solomonoff prior, and concepts from algorithmic information theory more generally, in the Appendix. 
Thank you for this suggestion.\\n\\n> Regarding my question on the permutation of the hypothesis, my point is that there seems to be a lot of different way to define \\\"Kolmogorov-inspired\\\" complexity measures that could be used in the generalization bound (for instance the composition of Kolmogorov complexity and any permutation seems to work).\\n\\nThis is a subtlety of Kolmogorov complexity which is a bit tricky to explain (the results of this discussion will go in the Appendix, so please let us know if this explanation clarifies the issue for you).\\n\\nFirst, we should more precisely state that we are bounding the K complexity of these parameters *relative to a given computing environment*, that is, K(\\\\theta) in our paper should more precisely be stated as K(\\\\theta | Python). That being said, because any Turing-complete language (e.g. Python, C, etc...) can simulate any other Turing-machine, with only a *finite, constant* cost to switch between languages (the length of the interpreter from one language to another), the Kolmogorov complexity is, in fact, universal, up to said fixed constant (the Kolmogorov Complexity Invariance Theorem). Does this clarify the issue? [These slides](https://users.cs.duke.edu/~reif/courses/complectures/Li/KC-Lecture1.pdf) provide a decent introduction into K complexity. \\n\\nThe compression provides an upper-bound on the Kolmogorov complexity, which we can straightforwardly connect to generalization through the bound. This gives a connection between the compressed size of the model, and a generalization bound, which is what we want. 
We think this idea, that the compressibility of the model controls the generalization performance, is under-studied, and here we are showing how adopting that perspective can explain what is going on with grokking: by looking at how compressible the model is at each step of training, we get a very precise picture of its complexity, which corresponds exactly with its generalization performance!\n\nThe permutation you're suggesting can add either very little complexity (null permutation), or maximal complexity (consider a permutation map \\sigma which has no description shorter than writing the entire permutation out), effectively a \\"random\\" permutation. Consider the shortest description of your \\sigma as a program -- composing the network weights with \\sigma adds K(\\sigma) complexity to the permuted weights. Does that help clarify the issue?\n\n[1]: Hutter, Marcus. \u201cA Theory of Universal Artificial Intelligence based on Algorithmic Complexity.\u201d ArXiv cs.AI/0004001 (2000)\"}", "{\"comment\": \"Thanks for your response.\n\n> This leaves some critical questions still open, such as: [...]\n\nThese questions are what we would call \\"phenomenological\\" ones, and have been explored in [1]. The answers to these questions will depend on the specific setup such as model architecture, dataset, optimizer, and so on. The theory of *why* generalization occurs is more fundamental than the phenomenology, and as we have demonstrated in this work, is directly related to complexity. If one knows the complexity, one can produce generalization predictions *without access to a test set*, using the bounds we provide alone.\n\n> The rise and fall of complexity seem to almost directly follow from how memorization and generalization are defined.\n\nWhat makes you say that? No one has yet demonstrated the rise and fall of complexity in these models, because a universal complexity measure has so far been unavailable. 
While we agree that it is intuitive, having a formal theory is incredibly important for making progress. For example, consider the \\\"Value Equivalence Principle\\\" [2] in model based reinforcement learning. It states that an optimal \\\"world model\\\" for decision making is in fact *incomplete*, and only models the world up to variations that influence the value function. We do not want to model details of the environment which will never affect our decision making. Using the lossy compression framework we have developed in this work, one can understand the Value Equivalence Principle as exactly our algorithmic rate--distortion theory, which uses the *value function* as the distortion function. Furthermore, because of the connection we make between generalization and complexity, this provides another formal justification for why agents acting under the VEP might generalize *better* than agents which model the world in full detail. Simpler explanations generalize better!\\n\\nWe would like to point out that this is the **learning theory** track of the conference. We have demonstrated a new approach to understanding model generalization through a **universal** complexity measure which is based on lossy compression. We hope you agree that the community will be interested in this perspective. If not, could you please explain why not?\\n\\n\\n[1]: Liu et al. Towards Understanding Grokking: An Effective Theory of Representation Learning. NeurIPS 2022.\\n\\n[2]: Grimm et al. The Value Equivalence Principle for Model-Based Reinforcement Learning. NeurIPS 2020.\"}", "{\"metareview\": \"**Summary of Discussion:**\\nThe paper introduces a novel complexity measure to study grokking dynamics and explores its implications for regularization via spectral entropy. The reviewers were divided, with some appreciating the theoretical angle while others highlighted critical gaps. \\n\\n**Key Concerns:** \\n\\n1. 
**Experimental Scope and Generality:** \\n - The experiments were confined to modulo arithmetic, limiting the generality of findings. \\n - The lack of validation on other tasks where grokking has been observed undermines the universality of claims. \\n\\n2. **Practical Impact of Regularization:** \\n - While the proposed regularization aligns with the theoretical framework, it did not demonstrate notable practical benefits over weight decay. \\n - The authors\\u2019 acknowledgment that performance gains were not a focus raised questions about the practical utility of the measure. \\n\\n3. **Clarity and Theoretical Justification:** \\n - Some core notions, such as capacity and distortion, lacked formal definitions, leaving room for interpretation and misunderstanding. \\n - The theoretical connection to Kolmogorov complexity, while promising, would benefit from greater formal rigor and explanation. \\n\\n**Conclusion:** \\nThe work provides an intriguing perspective on grokking but falls short on broader applicability and clarity. With expanded experiments across diverse tasks, more explicit practical implications, and clearer theoretical exposition, this paper could better meet the community's expectations.\", \"additional_comments_on_reviewer_discussion\": \"See above\"}", "{\"title\": \"Response 1.1\", \"comment\": \"Thanks for your review.\\n\\n**Comparison with other complexity measures**: The most commonly used proxy of complexity in the machine learning community is the L2 norm (followed by parameter count) [1,2]. As we discuss in the paper in the Related Work and elsewhere, the L2 norm is not a proper complexity metric, and its use as a complexity measure has led to a substantial amount of confusion. Indeed, one cannot construct a generalization bound from the L2 norm alone. To see this, note that a network can be re-scaled arbitrarily: We can multiply all our weights by an arbitrarily large or small constant. 
If a network\u2019s weights are all 10^5 or all 10^-5, the L2 norm reports vastly different \u201ccomplexities\u201d, whereas both are in fact simple. A true measure of complexity must have units of information.\n\nOn the other hand, Kolmogorov complexity is universal, and can be explicitly connected with generalization, as we show in Equation 4. \n\nIn a few very specific cases of particular statistical hypothesis classes, one can construct alternative complexity measures. However, these measures simply do not apply to generic neural networks, and so have no relevance in this setting. In addition to demonstrating the complexity dynamics that occur during grokking, we are trying to clarify the situation regarding model complexity by giving an explicit upper bound on a universal complexity measure via compression.\n\n**Correlation vs Causation**: You point out that it is unclear whether our measure only \u201ccorrelates\u201d with generalization, and that seeing it compared to other measures would help. In fact, this is the point of Equation 4: it guarantees a bounding relationship between the Kolmogorov complexity and the generalization performance. In related works like [1], they cannot guarantee any generalization performance; this is because their measures are not true complexity measures.\n\n**Cost of Computing Complexity**: We use a Bayesian optimizer to search for coarse-graining settings (ways of reducing the information content of the network) which achieve the same performance as the original network. Algorithm 1 shows this procedure. The number of Bayesian optimization steps is a hyperparameter. In our experiments, we set it to 50, which is relatively modest. This parameter could be changed to support various compute budgets.\n\nIn this work we are not concerned with the computational cost of the complexity estimate: the networks and datasets are not very large. The central goal is to understand the complexity dynamics in depth. 
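To make the rescaling point concrete, here is a minimal, self-contained sketch (illustrative only; it uses numpy and Python's stdlib bz2 codec as a stand-in for the full coarse-graining pipeline):

```python
import bz2

import numpy as np

n = 10_000

# Two "networks" whose weights are all 1e5 or all 1e-5.
big = np.full(n, 1e5)
small = np.full(n, 1e-5)

# The L2 norms differ by ten orders of magnitude...
print(np.linalg.norm(big), np.linalg.norm(small))  # ~1e7 vs ~1e-3

# ...yet both weight vectors are equally simple: each compresses to a
# handful of bytes, while random ("memorized") weights of the same shape
# are essentially incompressible.
rand = np.random.default_rng(0).standard_normal(n)
for w in (big, small, rand):
    print(len(bz2.compress(w.tobytes())))
```

The compressed sizes, not the norms, track the information content: both constant vectors shrink to tens of bytes, while the random vector stays near its raw 80 KB size.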
The complexity estimation budget will vary depending on the goal of the work. Here we want to get a good estimate of the complexity at every step. In many cases, practitioners will only want to know the complexity of the final model, and can perform this step only one time, after training.\n\n__\n\n> From what I understand, this complexity measure is somewhat dependent on the hyperparameters, in particular the per-layer truncation threshold \u03ba(\u03c4)\n\nThis is not correct: k(\\tau) is not a hyperparameter. The Bayesian optimization procedure searches for values of k(\\tau) at each step, to produce the tightest complexity estimate within its compute budget, as mentioned above. k(\\tau) is merely a way to allow for different degrees of rank decomposition per layer.\n\n> What is the exact definition of the novel complexity measure introduced in this paper?\n\nThe measure of complexity we use is given in Equation 5, the algorithmic rate\u2013distortion function. It returns the Kolmogorov complexity of the coarse model which satisfies the distortion bound.\n\n> For which models is this complexity measure well defined?\n\nThis complexity measure is well-defined for all possible models. Kolmogorov complexity is universal.\n\n> in line 400, can you clarify which subset of grokking experiments you used. And why you used this subset.\n\nWe have expanded the discussion in the experiments section. For simplicity, we chose the first 4 grokking experiments reported in the original grokking paper [3], which are also studied in [1]. These tasks are the best studied, but we had no other particular reason for choosing them.\n\nWe view this simplicity as a virtue: complexity is a difficult and subtle concept, as this discussion shows. We choose to study complexity in the simplest possible setting to clarify its nature. The original grokking paper [3] studies 12 different simple algorithmic tasks made of binary operations. 
[1] reduce this to a subset of 9 of the original tasks, although they increase the size of the prime field from 97 to 113. We have expanded our discussion of these settings and added a hyperparameter table to the appendix to more clearly explain our experimental setup.\"}", "{\"title\": \"Response 1.1\", \"comment\": \"Thank you for your review.\\n\\n>Several notions are mentioned repeatedly but without being formally defined, such as capacity, distortion or (\\u03bb,\\u03b4) (Equation (9)). It would improve the paper to include additional theoretical background and more formal definitions.\\n\\n**Formal definitions**: We formally defined all of these. The distortion is defined in Equation 6: it is the absolute difference in loss between the original and coarse-grained weights on the data. \\\\lambda and \\\\delta are defined just before Equation 9. They are the max size and precision of a single parameter. Imagine a parameter which can take values as large as 100 (size), but only in multiples of 10 (precision). Then its information capacity is log(100/10) = log(10) = log(number of bins).\\n\\n__\\n> It should be made clearer how the quantities introduced in Sections 3.1 and 4 are related to generalization. For instance, is it possible to write down a theorem with explicit dependence on these quantities, or are their consideration partially based on intuitions? Can the link of these quantities with Kolmogorov complexity be made more formal?\\n\\nThe algorithmic rate\\u2013distortion function returns the Kolmogorov complexity of the coarse-grained model at the distortion level \\\\epsilon, under the distortion function. This means that the generalization performance can be bounded by Equation 4, which links Kolmogorov complexity with expected risk. The dependence is explicit, not based on intuition. 
In the revised paper we have added a plot of the rate\\u2013distortion curve (Fig 3), so that you can see the complexity levels K at different levels of distortion \\\\epsilon. This plot shows that our method Pareto-dominates weight decay at all distortion levels. That is, our method results in more compressible models compared to weight decay at every distortion level.\\n\\n__\\n\\n> Despite the lack of formal theorems and proofs, the experiments are done on very simple arithmetic tasks. Therefore, it is not clear (neither theoretically nor empirically) whether the results may be generalized to more complex settings. I think that at least one experiment on a small dataset like MNIST or CIFAR10 could improve the paper.\\n\\nAs mentioned above, our bounds are explicit *and universal*, unlike other complexity proxies like L2 norm. It is not a limitation of our work that we test on \\u201csimple tasks\\u201d, it is a choice. We are trying to explain grokking, which was originally demonstrated on these modular arithmetic tasks [3]. They are an excellent test-case because they provide a clear, delayed transition from memorization to perfect generalization. This is a theory paper. Not every work benefits from scale: some ideas are best demonstrated in simple settings. While we intend to scale up our work, and have already begun follow-up work which does this, here we are focused on the basic science of learning, complexity, and generalization.\\n\\n__\\n\\n> It would be useful to include an experiment comparing the performance (in terms of accuracy) with and without the proposed regularization scheme. Indeed, we see that it reduces the MDL and the generalization bound, but, if I am correct, it is not clear whether it achieves better performance overall.\\n\\nThis is already included in the original draft\\u2019s appendix, in the final figure. 
As mentioned, all regularized models grok (that is, they transition from memorization (100% train accuracy, low test accuracy) to generalization (perfect test accuracy)), and the unregularized models remain in the memorization phase. The full train and test accuracy plots are shown in the final figure, in the Appendix, which we referenced in the Experiments section. There is no performance difference whatsoever between weight decay and our method, since both generalize perfectly. This is not the point of our work. Since the train and test entropy are effectively zero, the model complexity dominates the total description length, and so our method (which drives learning towards less complex structures) achieves a better total compression of the dataset, as demonstrated in the total description length plot.\\n\\n__\\n> We see in Figure 4 that the proposed regularization scheme achieves the lowest complexity. However, the complexity is computed by Algorithm 2 and the proposed regularization is precisely penalizing the quantity computed by algorithm 2. Therefore it does not seem surprising that it is the lowest. As an ablation study, it would be interesting to make the comparison using other complexity notions. \\n\\nIt\\u2019s the other way around: because our complexity measure is a proper complexity metric, we want to optimize it directly. We cannot, though, since the final step is not differentiable (zipping the weights). The spectral entropy penalty encourages the network to have low effective rank, but like L2, or any other information capacity proxy, it is not a complexity metric. It is only one way the network can be complex, but there are many such ways (all possible representation spaces).\"}", "{\"comment\": \"Thanks for your responses!\\n\\nWe will add some clarifying language around the discussion of the information capacity. Would it come across more clearly if we simply say the parameters are discrete, with max size \\\\lambda and precision \\\\delta? 
This might make it more clear how the parameters carry the complexity/information content of the model.\\n\\n> Would it be interesting to perform an experiment in a setting where the generalization is not perfect? (to see whether the new regularisation improves the generalization, which might have impact in practical settings).\\n\\nAbsolutely, and we are already doing this in some followup work. Since we're both introducing a new theoretical framework of complexity, as well as explaining a well-known phenomenon (grokking) in this work, we feel the paper's clarity is best served by remaining focused on grokking. \\n\\nThe new regularization method here is *not* our main focus. Though we use it to produce highly compressible networks, the main focus is on clarifying the relationship between complexity and generalization.\"}", "{\"summary\": \"This paper studies the grokking phenomenon through compression-based approaches. Inspired by recent work on the intrinsic complexity of neural networks, and combining it with ideas from rate-distortion, quantization and low-rank approximation, the authors propose a new measure of neural networks complexity, which consists essentially of a coarse-graining procedure. They conduct experiments on simple arithmetic tasks which demonstrate that the rise and fall of this complexity might be predictive of the network starting to generalize. Moreover, this leads them to propose a new regularization scheme, based on spectral entropy, whose effect seems to reduce the total description length and the generalization bound, compared to other methods. 
This might lead to non-vacuous generalization bounds.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Grokking is an important topic for the community\", \"The experiments suggest that the proposed regularization technique based on spectral entropy may induce grokking, which may be of practical interest.\", \"The experiments suggest that the rise and fall of the proposed complexity seems to be predictive of when the model starts to generalize.\", \"The proposed regularization techniques lead to better generalization bounds than classical weight decay or no regularization.\"], \"weaknesses\": [\"Several notions are mentioned repeatedly but without being formally defined, such as capacity, distortion or $(\\\\lambda,\\\\delta)$ (Equation (9)). It would improve the paper to include additional theoretical background and more formal definitions.\", \"It should be made clearer how the quantities introduced in Sections 3.1 and 4 are related to generalization. For instance, is it possible to write down a theorem with explicit dependence on these quantities, or are their consideration partially based on intuitions? Can the link of these quantities with Kolmogorov complexity be made more formal?\", \"Despite the lack of formal theorems and proofs, the experiments are done on very simple arithmetic tasks. Therefore, it is not clear (neither theoretically nor empirically) whether the results may be generalized to more complex settings. I think that at least one experiment on a small dataset like MNIST or CIFAR10 could improve the paper.\", \"It would be useful to include an experiment comparing the performance (in terms of accuracy) with and without the proposed regularization scheme. Indeed, we see that it reduces the MDL and the generalization bound, but, if I am correct, it is not clear whether it achieves better performance overall.\", \"We see in Figure 4 that the proposed regularization scheme achieves the lowest complexity. 
However, the complexity is computed by Algorithm 2 and the proposed regularization is precisely penalizing the quantity computed by algorithm 2. Therefore it does not seem surprising that it is the lowest. As an ablation study, it would be interesting to make the comparison using other complexity notions. For instance, using the actual test accuracy would be very informative, to see whether the proposed regularization leads to better performance.\"], \"questions\": [\"Is it possible to perform the same experiments on more complex but still relatively simple datasets like MNIST or CIFAR10?\", \"Does the generalization bound of Equation (4) only hold for finite hypothesis spaces? If yes is that a realistic assumption in practical learning settings? Moreover, could you be more precise as to why the choice of Solomonoff prior should lead to tighter bounds than other priors, such as the uniform prior over $\\\\mathcal{H}$?\", \"Line 181: Why can the empirical risk be understood as the entropy of the data under the model? Is there a way to formalize this fact?\", \"Is it possible to obtain a formal statement relating the information capacity (Equation (9)) to generalization?\", \"To what size and precision do the parameters $\\\\lambda$ and $\\\\delta$ (Section 4) refer to in practice?\", \"How would the training accuracy be affected by the addition of Gaussian noise in practical deep learning settings?\", \"Can you define more precisely the notations used in Algorithm 2, such as BO.SUGGESTPARAMETERS()? 
More generally, can you provide more details on the Bayesian optimization procedure?\", \"Does your regularization technique always lead to lower test accuracy compared to weight decay?\", \"Figures 3 and 5 are not analyzed in the text, can you add some insights on the result they present?\", \"**Remarks/questions regarding lines 152 - 155 and Equation (4)**\", \"Even though it is not central to the paper, I have some questions about this part:\", \"As I understand it, the bounds in terms of Kolmogorov complexity are obtained by choosing a good prior distribution in the bound of Langford and Seeger. It is not clear to me that such a choice of prior provides the most useful bound. More precisely, let $\\mathcal{H}$ be a finite set of hypotheses and $\\sigma : \\mathcal{H} \\to \\mathcal{H}$ be any bijection of $\\mathcal{H}$. Then $h \\mapsto 2^{-K(\\sigma(h))}$ may be used as a prior instead of the usual Solomonoff prior, hence leading to a generalization bound in terms of $K(\\sigma(h))$. Yet another possibility would be to use the uniform prior over $\\mathcal{H}$. Therefore, the choice of prior, and therefore the choice of Kolmogorov complexity as a generalization measure, seems to be arbitrary (please correct me if I am mistaken). Can you provide more insights as to why this leads to the most informative bound?\", \"I would be happy to discuss this further, please correct me if I misunderstood something.\", \"**Other minor remarks and typos**\", \"In the introduction, the terms capacity and complexity are used before being defined, which may render the introduction hard to read. In general, more formal definitions of these concepts might enhance the readability of the paper. 
It could also help to define the notion of distortion function.\", \"Line 122: regulariztion $\\\\to$ regularization\", \"Equation (4): there is a missing parenthesis in $\\\\log(1/\\\\delta)$\", \"There might be a clash of notation between the parameter $\\\\delta$ in Equations (4), (9) and (10). It would be clearer to use a different letter in each of these equations.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the phenomenon of grokking through the lens of complexity theory and rate distortion theory. It proposes ways to compress model weights:\\n-- Via a parameter quantization operation, as a twist on ideas of Hinton and Van Camp\\n-- Via a low-rank approximation operation.\\nThe idea is compress the models up to certain rate distortion thresholds, quantified by the loss. \\nThey find that this compression is substantially more powerful than traditional compression methods (bzip) and argue that this is a better approximation of the Kolmogorov's complexity of the model.\\nUsing this metric, the authors perform experiments on arithmetic operations and find that the grokking phase is associated with a drop from the complexity peak. Following this idea, they propose a new regularizer that apparently increases the grokking effect.\\n\\nOverall, this is a very well-written paper that lays out super interesting ideas and presents a compelling thesis and nice experiments. 
I am not sold on the idea that this is an explanation of grokking, but the observations and the conclusions are overall very interesting and I think this is a valuable contribution to understanding better what happens with grokking and is quite promising to improve learning performance of models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Excellent writing, compelling ideas, nice experiments, convincing thesis, possible follow-ups.\", \"weaknesses\": \"Is it really an explanation of grokking or more some interesting and attractive observations?\\nThe experiments with the regularizer are not many.\", \"questions\": \"Have you tried applying these ideas to more complex datasets, does it compare favorably vs weight decay techniques ?\\n\\nBzip is not ideal to compress weights... are there other points of comparisons available?\\n\\nWhat is the efficiency of your compression method? How long does it take to compress?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"thank you for your answer - response to 1.1\", \"comment\": \"I agree that the notions such as $\\\\lambda$ and $\\\\delta$ are defined in the text, but sometimes it is just defined by a term like 'max size' or `precision`. I think it would be highly beneficial to define it more formally, as the notion of precision sounds a bit vague, especially for a reader who might be new to compression-based approaches. For instance, you could include in the text the examples you provided in your answer above.\\n\\nThank you for clarifying Figure 4 in the paper, I may have misunderstood its exact meaning in my initial review.\\n\\nWould it be interesting to perform an experiment in a setting where the generalization is not perfect? 
(to see whether the new regularisation improves the generalization, which might have an impact in practical settings).\"}", "{\"summary\": \"This paper proposes to study the grokking dynamics via the lens of information theory (minimum description length). In particular, they proposed: (1) a new compression algorithm to compress the neural network; (2) a new regularizer based on spectral entropy. They show that the spectral entropy regularizer outperforms the standard weight decay to the extent that a model with lower complexity is obtained. They claimed a factor of 30-40x improvement of the compression ratio over bzip2, which is impressive (although I can't find the file size data). However, none of the compression methods achieve a non-vacuous bound, since models are vastly over-parametrized.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and very readable\", \"The paper presents \\\"new\\\" theoretical tools to analyze neural networks\", \"The analysis is a new angle to understand grokking\"], \"weaknesses\": \"* This paper deals with too many things simultaneously, which makes me a bit lost. What's *the* motivation of this paper? Otherwise, the paper reads like a collection of ok-ish results but none of them is impressive enough. For example, the idea of grokking as compression has been explored by [Liu et al.], [Humayun et al.] and [Deletang et al.]. The idea of using spectral entropy as a measure is explored in [Liu2 et al.], although it is novel to regularize the network with spectral entropy (which is unfortunately expensive).\\n* The paper claims a 30-40x improvement in compression ratio, but I did not find any details or data. 
\\n* Although this is a more theoretical paper than an experimental paper, I am not sure about its practical implications.\\n\\n**References**\\n\\n[Liu et al] Grokking as Compression: A Nonlinear Complexity Perspective, arXiv: 2310.05918\\n\\n[Del\\u00e9tang et al.] Language Modeling Is Compression, ICLR 2024\\n\\n[Humayun et al] Deep Networks Always Grok and Here is Why, arXiv: 2402.15555\\n\\n[Liu2 et al] Towards Understanding Grokking: An Effective Theory of Representation Learning, NeurIPS 2022\", \"questions\": [\"What's the key motivation of this paper?\", \"Could you elaborate on the comparison with bzip2? What is being compressed, problem setup, compressed file size, etc.?\", \"What practical implications does this paper have? I would consider a method practically useful if: (1) it can speed up grokking and/or (2) it can compress real-world datasets better than baselines.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1.3\", \"comment\": \"> Does your regularization technique always lead to lower test accuracy compared to weight decay?\\n\\nBoth regularization methods achieve perfect test accuracy.\\n\\n__\\n\\n> Figures 3 and 5 are not analyzed in the text, can you add some insights on the result they present?\\n\\nWe have updated the draft with additional discussion of the figures, and new figures. In addition to the rate\\u2013distortion curves we mentioned earlier, we also added complexity dynamics plots for the unregularized network, so you can see how the complexity stays high at all times after memorization occurs, with no generalization following. We also added effective rank plots, which show how our spectral entropy method encourages models toward low effective rank. 
Interestingly, weight decay also seems to encourage effective low-rank representations, though not as strongly as ours.\\n\\nRegarding your question on the permutation of hypotheses. Indeed one is free to choose any prior one wishes - if we understand your point correctly, your question amounts to whether there is a unique, canonical notion of \\u201ccomplexity\\u201d. Firstly, consider that the permutation (bijection) you apply to the hypotheses has itself a non-zero Kolmogorov complexity, so you're not leaving the complexity invariant by permutation. However, this leads us to the question of a unique ordering on the hypotheses/natural numbers. This is an interesting and deep question, but it is beyond the scope of our work. While there exist invariance theorems for Kolmogorov complexity, they only hold up to an arbitrary constant, so we can only make strong statements about asymptotic complexity. We recommend the textbook on Kolmogorov complexity by Li and Vitanyi if you are interested in the mathematical foundations of algorithmic complexity.\\n\\n[1] Dingle, Kamaludin, Chico Q. Camargo and Ard A. Louis. \\u201cInput\\u2013output maps are strongly biased towards simple outputs.\\u201d Nature Communications 9 (2018)\\n\\n[2] Johnston, Iain G., Kamaludin Dingle, Sam F. Greenbury, Chico Q. Camargo, Jonathan P. K. Doye, Sebastian E. Ahnert and Ard A. Louis. \\u201cSymmetry and simplicity spontaneously emerge from the algorithmic nature of evolution.\\u201d Proceedings of the National Academy of Sciences of the United States of America 119 (2021)\\n\\n[3]: Power, Alethea, Yuri Burda, Harrison Edwards, Igor Babuschkin and Vedant Misra. \\u201cGrokking: Generalization Beyond Overfitting on Small Algorithmic Datasets.\\u201d ArXiv abs/2201.02177 (2022)\\n\\n[4]: Lotfi, Sanae, Marc Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum and Andrew Gordon Wilson. 
\\u201cNon-Vacuous Generalization Bounds for Large Language Models.\\u201d ArXiv abs/2312.17173 (2023)\\n\\n[5] Geoffrey E. Hinton and Drew van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Annual Conference Computational Learning Theory, 1993.\"}", "{\"title\": \"Response 1.2\", \"comment\": \"> While entropy regularization surely helps in compressing the model, I expect that both the usual L2 regularization and the entropy regularization will achieve perfect test accuracy. Could you think of a scenario where the proposed regularization technique offers a clear performance advantage over L2 regularization?\\n\\nYes, one can achieve perfect test accuracy with almost any regularization method. It is not difficult to get these models to generalize. The point of this work is not to propose a new regularization scheme which performs better than weight decay across a range of tasks: the point is to clarify the relationship between complexity and generalization in neural networks. As we discuss in the paper, L2 is not a valid complexity metric since networks can be arbitrarily rescaled. A proper complexity metric must have units of information. Using our complexity metric, one can construct explicit generalization bounds (Equation 4), unlike prior works which study complexity in grokking.\\n\\n__\\n\\n> Will entropy regularization also help in training larger models with more complicated datasets, where they often do not have simple representations as one-dimensional numbers?\\n\\nYes, the spectral entropy regularization will always penalize models towards low-rank solutions. Of course, if the regularization strength is too large, this can lead to model collapse, just like any other regularization method. 
There is no particular relationship between the fact that grokking occurs on modular arithmetic equations, and the complexity of the models.\\n\\n__\\n\\n> Could the computational overhead of low-rank optimization become significant, especially when applied to large models? If so, how could we mitigate them?\\n\\nUltimately, in this work we want to track the complexity as closely as possible to get a sharp picture of the complexity dynamics throughout training, to illustrate the phase transition from memorization to generalization. In real-world applications, one probably does not need to get complexity estimates this densely, and if one is only interested in the final performance of the model, they could get a complexity estimate once at the end of training.\"}", "{\"title\": \"Response 1.2\", \"comment\": \"> in line 358 you state \\\"..we show that regularizing the spectral entropy leads to grokking..\\\" Is this an overstatement? How exactly is grokking defined quantitatively?\\n\\nNo, this is not an overstatement. Regularizing the spectral entropy alone does cause grokking, which is defined as perfect generalization after overfitting. It is not difficult to cause grokking with different regularizers, however, this is not a central claim of our work, so we have changed this line to better make our point: we now merely state that our regularization method also causes grokking. The final plot in the appendix shows grokking induced by our regularization method. Grokking is defined by a distinct memorization phase where train accuracy is 100%, and test accuracy is low (<30%, often 0%), followed by a generalization phase where test accuracy goes to 100%. We plotted these accuracy curves in the final figure in the appendix to demonstrate that all regularized networks grok, and unregularized networks do not grok.\\n\\n\\n> In Figure 3, you compare your regularization technique with weight decay. 
What is the dependence of the proposed spectral entropy regularization on the regularization weight? What behavior do you notice as you apply more or less spectral regularization? It would be nice to see the effect as the regularization of the spectral entropy gradually increases.\\n\\nLike any regularization method, one ideally wants to find a good hyperparameter for each new model class, loss function, and optimizer. There is no universally correct regularization weight, generally. We apologize that a hyperparameter table was missing from the original submission. We have included a hyperparameter table in the updated draft.\\n\\nAs we show in the updated draft, Fig 8 in the appendix, the effect of the spectral entropy regularization can be seen clearly as a decrease in the effective rank of the matrix.\\n\\n> Does Figure 4 include multiple seeds? Why are error bars not visible in this plot?\\n\\nYes, the total description length plots are produced with multiple seeds. The lack of error bars was an oversight on our part, which we have remedied in the updated draft.\\n\\n> in Figure 2. Why include the \\\"ours\\\" distinction when all plots are \\\"ours\\\".\\n\\nWe have updated this figure to remove \\u201cours\\u201d, and only mention it in the caption. We have also added the complexity and accuracy plots for unregularized networks, so that you can see the relationship of the complexity dynamics with train and test accuracy in the case where generalization does not occur.\\n\\n**Conclusion**: Overall, we wish to emphasize that prior works which study complexity in grokking do not use true complexity measures, only proxies of complexity. Our complexity metric is based on the Kolmogorov complexity, which is universal and so can be applied to any model class. Our use of Kolmogorov complexity results in explicit generalization bounds. 
\\n\\nWe agree with your point that one wants to know whether the complexity measure being used guarantees generalization performance, vs merely correlates with generalization. This is *why* one wants an explicit generalization bound, and is a particular strength of our work vs previous works. Because they only study proxies of complexity, they cannot, in general, produce such bounds, whereas we can since we use a universal measure of complexity.\\n\\nWe are excited to share this development in the theory of complexity and generalization with the community, and think that many people will be interested.\\n\\n[1]: Varma, Vikrant, Rohin Shah, Zachary Kenton, J'anos Kram'ar and Ramana Kumar. \\u201cExplaining grokking through circuit efficiency.\\u201d ArXiv abs/2309.02390 (2023)\\n\\n[2]: Nakkiran, Preetum, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak and Ilya Sutskever. \\u201cDeep double descent: where bigger models and more data hurt.\\u201d Journal of Statistical Mechanics: Theory and Experiment 2021 (2019)\\n\\n[3]: Power, Alethea, Yuri Burda, Harrison Edwards, Igor Babuschkin and Vedant Misra. \\u201cGrokking: Generalization Beyond Overfitting on Small Algorithmic Datasets.\\u201d ArXiv abs/2201.02177 (2022)\"}", "{\"title\": \"Response 1.2\", \"comment\": \"> What practical implications does this paper have? I would consider a method practically useful if: (1) it can speed up grokking and/or (2) it can compress real-world datasets better than baselines.\\n\\nThis is a **theory paper**, submitted to the learning theory track. We are interested in the basic science of learning, complexity, and generalization. Our primary concern is to understand the nature of complexity and generalization more precisely. To this end, our method provides a universal, computable complexity measure that can be used to study complexity in any parameterized model. 
Furthermore, our theory connects complexity with information capacity in a fundamental way, through both quantization and low-rank decomposition. Questions of key importance to ML practitioners include: How much can we quantize our model (e.g. an LLM)? How much can a model be distilled into a smaller one (e.g. what is the lowest-rank model which can achieve this performance)? Our results can help answer these questions. Even more intriguing, in our view, is the question of emergence. What kinds of abstractions/representations emerge during learning? Are they simple or complex? What does complex even mean? Will what my model learned generalize to new examples? These are the sorts of questions that our work asks, and contributes to answering.\\n\\nIn terms of practical results, our method causes grokking to happen faster than with weight decay alone (see final figure in the appendix). Because the models it produces are less complex than the alternatives, it also achieves better compression on its datasets, since, as we discussed, the model size must be considered as part of the total compressed size of the dataset.\\n\\n__\\n\\n> the paper reads like a collection of ok-ish results but none of them is impressive enough.\\n\\nNo one has yet explained grokking in a way the community accepts. We think that understanding the grokking phenomenon in terms of the network complexity dynamics sheds light on the nature of abstraction formation in neural networks, which has the potential to fundamentally change how the community understands the dynamics of complexity in learning models.
06ZvHHBR0i
Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debate
[ "Chaithanya Bandi", "Hari Bandi", "Abir HARRASSE" ]
We propose a novel framework for evaluating large language model (LLM) outputs using LLMs themselves as interacting agents in an adversarial debate system. Our approach casts LLMs as advocates, judges, and juries within a structured courtroom-inspired setting. Advocate LLMs engage in iterative argumentation to refine and critique responses, while judge and jury LLMs moderate and assess the debate. We introduce a probabilistic model using Beta-Binomial distribution to analyze error reduction dynamics in this iterative process. Comparative studies of ranking versus scoring methods for LLM jurors reveal advantages of fine-grained scoring in capturing nuanced quality assessments. Experiments across diverse language tasks demonstrate our framework's superior performance in agreement with human judgments and provision of interpretable feedback compared to traditional evaluation methods. This work contributes a theoretically grounded, scalable approach to LLM evaluation that addresses limitations of existing techniques and adapts to rapid advancements in language AI technologies.
[ "LLM Evals", "Adversarial analysis", "Mechanism Design" ]
Reject
https://openreview.net/pdf?id=06ZvHHBR0i
https://openreview.net/forum?id=06ZvHHBR0i
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rzWtTmWcft", "lNq9k5nVgn", "g41lOTSy2H", "Ncdvk8sGIL", "FkHGtihbty", "DYgSoQdI9h" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1729861486391, 1737524219254, 1730704090397, 1729599329874, 1730666158853, 1734584567263 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12847/Reviewer_Fn2r" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12847/Reviewer_MXfv" ], [ "ICLR.cc/2025/Conference/Submission12847/Reviewer_wgh9" ], [ "ICLR.cc/2025/Conference/Submission12847/Reviewer_33aV" ], [ "ICLR.cc/2025/Conference/Submission12847/Area_Chair_KVFT" ] ], "structured_content_str": [ "{\"summary\": \"I suspect that this paper may have been generated by a generative AI (such as ChatGPT). The evidence supporting this suspicion includes:\\n\\n1. The title of the PDF differs from the title listed on OpenReview.\\n2. A significant portion of the literature cited appears to be fabricated. While I have not verified every citation, most of the references listed from 2023 onwards seem likely to be fake.\\\"\", \"for_examples\": \"[10] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Eric Wong, Zihang Zhang, Andy Zou, Lianmin Zheng, Siyuan Yu, Yi Tian, Yinghai Zhu, et al. Chatbot arena: Benchmarking open large language models in the wild. arXiv preprint arXiv:2306.01670, 2024.\\n\\n[25] T. Lanctot, A. Charnock, and J. Badger. Evaluating multi-agent systems in language models. In NeurIPS 2023 Workshop on Multi-Agent Systems, 2023.\\n\\n[26] Y. Li, D. Chen, and T. Brown. Agents as evaluators: The role of multi-agent systems in llm assessment. In Proceedings of the 2024 Conference on Neural Information Processing Systems (NeurIPS), 2024.\\n\\n[34] S. M. Panickssery, E. Lee, and K. Lee. Llm-based evaluators for language models: Opportunities and challenges. 
In Proceedings of the 2024 International Conference on Learning Representations (ICLR), 2024.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"see summary\", \"weaknesses\": \"see summary\", \"questions\": \"see summary\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents two multi-agent systems inspired by courtroom for evaluating the outputs of LLMs. The experiments show the proposed frameworks improve accuracy compared with a single LLM as a judge.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The method incorporates insights from a legal decision-making perspective, and provide two frameworks that simulate the human workflow in court.\", \"weaknesses\": \"- Lack of experiments: More evaluation datasets and baselines should be incorporated into the experiments. For example, LLM-based multi-agent evaluators such as PRD [1] and ChatEval [2] could be baselines. There are many datasets in this LLM-based evaluator topic, such as AlignBench [3], AUTO-J [4] and LLMEval [5].\\n\\n- The presentation needs to be refined: \\n\\n(a) The background (in both Section 1 and Section 2) is taking up too much space. This background can be concluded to make space for evaluation details in Appendix D. \\n\\n**(b) The Conclusion section and Appendix A.5 are likely to be AI-generated (according to GPTZero).**\\n\\n- The multi-agent systems will surely use more tokens compared to LLM-as-a-judge. 
What is the cost per run compared to other multi-agent frameworks (such as PRD [1] and ChatEval [2])?\\n\\n[1] PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations https://arxiv.org/abs/2307.02762\\n\\n[2] ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate https://arxiv.org/abs/2308.07201\\n\\n[3] AlignBench: Benchmarking Chinese Alignment of Large Language Models https://aclanthology.org/2024.acl-long.624/\\n\\n[4] Generative Judge for Evaluating Alignment https://arxiv.org/abs/2310.05470\\n\\n[5] LLMEval: A Preliminary Study on How to Evaluate Large Language Models https://ojs.aaai.org/index.php/AAAI/article/view/29934\", \"questions\": [\"The Qwen and Gemini model versions should be specified.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a framework that interprets Large Language Models (LLMs) as advocates within an ensemble of interacting agents, allowing them to defend their answers and reach conclusions through a judge and jury system.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The writing is relatively fluent.\", \"weaknesses\": [\"The framework proposed by the authors can be seen as an implementation of a multi-agent approach in the field of LLM-as-judges, with limited novelty and contribution to the community.\", \"There is a lack of detailed description and justification for the proposed framework, with specific issues highlighted in the Questions section below.\", \"The authors mentioned probabilistic modeling as one of the key contributions in the abstract (Line 018), but only dedicated a single sentence to this aspect in the main text (Line 395).\", \"The authors conducted only one experiment, comparing the accuracy of their designed framework with a simple baseline, which is insufficient to support their claims. 
I suggest that the authors add the following experiments and comparison methods:\", \"**Comparison methods:**\", \"LLMs specifically trained for evaluation, such as PandaLM or Prometheus model.\", \"Multiple LLM evaluators using a majority voting strategy.\", \"**Experiments:**\", \"A comparison of the API and time costs between the proposed MORE and SAMRE frameworks and the aforementioned comparison methods.\", \"A performance comparison of the MORE and SAMRE frameworks under different parameter settings (e.g., the number of advocates).\", \"A bias analysis comparing the MORE and SAMRE frameworks with the aforementioned comparison methods to demonstrate the claim of being \\\"unbiased\\\" (Line 116) and mitigating the influence of strategic behavior and individual biases (Line 119).\"], \"questions\": [\"In the proposed MORE framework, why employ three advocates for each answer? Are these advocates different in any way? Additionally, why does judge J provide scores s_1 and s_2 for both answers at the same time (Line 245)? Does this introduce additional bias? I assume the distributions of s_1 and s_2 obtained this way differ from the distributions obtained if s_1 and s_2 were assessed separately.\", \"What prompts are used for jurors with different backgrounds? I also question whether merely assigning an identity through the prompt (e.g., \\\"A retired professor of ethics\\\") allows the LLM\\u2019s evaluation to truly represent the standards of that demographic. This method\\u2019s effectiveness requires further validation.\", \"Could the authors provide an example for the stopping mechanism (Lines 262-263)?\", \"Why does Algorithm 2 discuss the case of three jurors, when the authors claim five diverse jurors (Line 253)? 
The authors need to provide the correct version of Algorithm 2.\", \"Why does the performance of the SAMRE architecture without juries in Table 1 surpass that of SAMRE?\", \"#### Minor Problems\", \"The authors should cite reference papers for the theories mentioned in Lines 036-039.\", \"The authors should clarify the version of the LLMs reported in Table 1. For example, the version of Qwen.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work explores different processes of rating large language model (LLM) outputs using LLMs. Inspired by legal, psychological, and decision theory, the authors propose two such processes: (1) \\u201cMulti-Advocate One-Round Evaluations\\u201d (MORE) and \\u201cSingle Advocate Multi-Round Evaluation\\u201d (\\u201cSAMRE\\u201d). Given a question and two (LLM) outputs, each process uses LLMs in different roles, e.g., as advocates, jurors, or judges, to (iteratively) select the \\u201cbest\\u201d output. The authors further present two theorems respectively claiming that (1) aggregated multi-advocate arguments lead to greater score differentiation than those obtained using iterative debates, and (2) that multi-advocate argumentation requires fewer rounds of interaction to receive the same level of score differentiation as iterative debate. The two processes are tested on the MT-Bench dataset and compared to a baseline of a single LLM judge process using six different LLMs. 
The authors conclude that their experimental results provide strong empirical evidence for their proposed methods.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"[clarity] The work was generally easy to read with crisp writing.\\n \\n[significance] Exploring ways to improve the evaluation of LLM outputs is an important research direction that was well motivated by the authors.\", \"weaknesses\": \"[originality] Several works have proposed different ways of using LLM ensembles to evaluate LLM outputs. While the authors spend considerable time discussing connections to various disciplines, e.g., decision theory, legal discourse, and psychology, few tangible insights are presented as to how this specific ensemble utilizes these disciplines.\\n \\n[quality] The experimental results presented in this work simply do not pass the bar for this conference: (1) Only a single, limited dataset is used, (2) critical experimental details are missing, e.g., number of samples used, confidence intervals, temperatures, single-judge baseline setting, length of argument outputs, etc., (3) none of the presented theorems are tested in the experiments, e.g., claims like \\u201cgreater score differentiation\\u201d and \\u201ccomplexity\\u201d are neither quantitatively nor qualitatively discussed in the experiments, (4) prompt sensitivity and selection is not discussed at all. This is especially damning for a work focused on improving evaluation.\\n \\n[significance] In essence, the work proposes (iterative) ensemble scoring using LLMs. The claim of [line 481] \\u201cstrong empirical evidence for the effectiveness of the proposed LLM advocate architectures in improving the accuracy of LLM output evaluation\\u201d is greatly exaggerated and unsupported. There is good reason to believe that most of the reported improvements over a single LLM-as-judge baseline come from the greatly expanded compute budget and the series of hand-crafted prompts. 
Similar results might thus be obtained by simply providing a single LLM an expanded compute budget and chain-of-thought style reasoning prompts.\\n \\n[clarity] While the presented theorems and proofs in the appendix are an admirable attempt at introducing rigor to LLM-ensemble evaluation, they also display a limited understanding of the many practical considerations in using LLMs and the large existing literature documenting poorly understood LLM behaviors. Sweeping, unmotivated assumptions like those on line 321, or the assumption that LLMs assigned different \\u201cpersona prompts\\u201d logically will obtain more diverse and stronger arguments, limit the usefulness of the presented theorems.\", \"questions\": \"1. [results] Did the authors analyze the different types of arguments and justifications between the different ensembles in scoring answers?\\n2. [results] Were there any question-answer pairs for which the ensemble methods performed particularly better than the single-judge baseline?\\n3. [experiments] How many tokens were needed on average for the different ensembles and models studied?\\n4. Section 3.5 is entirely in the appendix, yet referred to in the conclusion [line 499] as discussed. At the minimum, discuss the main results of a section in the main text when referring to it in the conclusion.\\n5. [line 504-505] \\u201cwe have conducted \\u2026 our framework\\u201d: where?\\n6. [C.2-C.3] How were any of these chosen? They seem completely arbitrary and unmotivated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
I'd encourage the authors to fundamentally revise the paper before attempting to submit elsewhere.\", \"additional_comments_on_reviewer_discussion\": \"There was no engagement from the authors to respond to the reviewers.\"}" ] }
06GH83hDIv
Auction-Based Regulation for Artificial Intelligence
[ "Marco Bornstein", "Zora Che", "Suhas Julapalli", "Abdirisak Mohamed", "Amrit Singh Bedi", "Furong Huang" ]
In an era of "moving fast and breaking things", regulators have moved slowly to pick up the safety, bias, and legal pieces left in the wake of broken Artificial Intelligence (AI) deployment. Since AI models, such as large language models, are able to push misinformation and stoke division within our society, it is imperative for regulators to employ a framework that mitigates these dangers and ensures user safety. While there is much-warranted discussion about how to address the safety, bias, and legal woes of state-of-the-art AI models, the number of rigorous and realistic mathematical frameworks to regulate AI safety is lacking. We take on this challenge, proposing an auction-based regulatory mechanism that provably incentivizes model-building agents (i) to deploy safer models and (ii) to participate in the regulation process. We provably guarantee, via derived Nash Equilibria, that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold. Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively, outperforming simple regulatory frameworks that merely enforce minimum safety standards.
[ "Regulation", "Mechanisms", "Auctions", "Artificial Intelligence" ]
Reject
https://openreview.net/pdf?id=06GH83hDIv
https://openreview.net/forum?id=06GH83hDIv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zpePEmArng", "zlX436oXDy", "zfaSg6lTXY", "xBylEuv8Nk", "vwoqraf2z9", "vgmgZLb0bQ", "rcXM2Zy8E5", "mIo7nG1GPf", "loBtkRpvs1", "imNTwfo9mq", "e7jjGyWY7H", "dzBkBShg6V", "cWaHRbYwqB", "ZHzMAmYTAe", "Z11LR4tbNp", "XqWJTNh5W0", "WCPuwzqGfO", "JzIb4tSy2k", "J6U2HHgjTm", "DnisgagmgU", "CwfgbJ1lBH", "9UPNkWpMaA", "8NfHc5eW2I", "8Mm9eQTcC0", "80lbXYE3vM", "7WcLCYAylo", "6heTR8w9K2", "5p1aNEba2F", "54x9fXc8nh", "45fGgWXqUA" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1730118665279, 1732550940723, 1732898092730, 1732568407624, 1731641852262, 1732473986846, 1737523439009, 1732365250599, 1732541502896, 1731640798491, 1732293145228, 1729630206684, 1732563052275, 1731642751617, 1732552940677, 1732474929145, 1733163113422, 1731640462536, 1734708607390, 1732207321514, 1732474232125, 1733163269268, 1731640127307, 1732363440407, 1733163067513, 1731641126537, 1731640556571, 1730386913056, 1730418121280, 1731640280881 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_LVmP" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_dJpW" ], [ 
"ICLR.cc/2025/Conference/Submission1176/Reviewer_LVmP" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_Ft8N" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_F4Wy" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_dJpW" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Area_Chair_tfzg" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_dJpW" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_dJpW" ], [ "ICLR.cc/2025/Conference/Submission1176/Reviewer_Ft8N" ], [ "ICLR.cc/2025/Conference/Submission1176/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper addresses the challenges regulators face, particularly with the deployment of large language models that can amplify misinformation and societal division. It highlights the urgent need for effective regulatory frameworks to mitigate these risks and enhance user safety. Observing a gap in the availability of rigorous and realistic mathematical frameworks for AI regulation, the authors propose an innovative auction-based regulatory mechanism. This mechanism is designed to incentivize the development and deployment of safer AI models and encourage active participation in the regulatory process. 
It demonstrates through derived Nash Equilibria that the proposed auction mechanism effectively ensures that each participating agent\\u2019s optimal strategy aligns with submitting a model that exceeds a set minimum-safety threshold.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The topic considered in this paper is interesting and important. Regulations are needed to ensure AI safety.\\n\\n2. Theoretical results are provided whose proofs can be found in the appendix. I didn't check all the mathematical proofs.\\n\\n3. The paper is overall well-written and well-motivated.\", \"weaknesses\": \"1. The way used by the paper to model the safety may not be realistic. It is assumed to be some safety level $s_i$ of a model $w_i$, which is expected to be less than $\\\\epsilon$. How is the safety measured for AI models using the metric mapping $S$ in practice? For common foundation models and LLMs, it might be hard to evaluate $S$ for $w_i$, especially given the size of $w_i$. What if a model provider takes advantage of the inaccuracy of the safety evaluation to benefit itself?\\n\\n2. The proposed auction algorithm, together with the theoretical results and analysis seem quite standard. How does it differ from the classic all-pay auction results (for instance, Amann et al. 1996) in the setting for AI models? It is worth highlighting the technical novelty and emphasizing why the proposed method is needed for AI models, given that it is claimed in Line 398-399 that \\\"To the best of our knowledge there are no other comparable mechanisms for safety regulation in AI.\\\"\", \"questions\": \"1. What is the technical challenge in the considered auction problem for AI models, compared to classic auction problems?\\n\\n2. Practical AI models are often very large. How can the safety of these models be evaluated? Given that the auction is done in a one-shot setting, probably it is fine even if the model is large.\\n\\n3. 
I am more concerned about the compensation $v_i^p$, which needs to be provided by a regulator to implement the proposed auction algorithm. Why is this practical for existing AI models? How large does the compensation need to be? According to bidding equilibrium in Theorem 2, $v_i^p$ needs to be large for safer models. How could this be made up to compensate what the commercial AI models could achieve?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Response\", \"comment\": \"Dear Reviewer LVmP,\\n\\nThank you for your response confirming that we have answered your questions. We are happy to address any remaining concerns if they exist. If all of your concerns have been addressed, we would request a reconsideration of your original score.\"}", "{\"title\": \"Update and Followup\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your engagement and discussion.\\n\\nAs an update, we have added our ablation study, including the promised figure, within our revised paper (in Appendix C.1). After the revision deadline, we also added a section exploring and explaining Assumption 2. Namely, we detail the motivation and generality of the assumption, pointing towards our new Ablation Study as evidence. Likewise, we describe that, in a space with no assumptions let alone theoretical analysis, we use Assumption 2 to establish the first theoretical results for safety-incentivized AI regulatory frameworks. We note that our future research aims to further relax the assumptions made within our paper.\\n\\nWith the discussion period ending this Monday, we also wanted to make sure that we have clarified all questions and concerns. If not, we are happy to clarify any questions before the deadline. 
If all of your concerns have been addressed, we would request a reconsideration of your scores.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Discussion Response\", \"comment\": \"Thank you for your response, and we appreciate that you find our paper has \\\"good potential to contribute to the domain\\\" of AI regulatory frameworks. Below, we address your comment.\\n\\n> **Comment:** I concur to other reviewers' points about the practical aspects of Assumption 1 and 2, and the rationality of the auction framework.\\n\\n**Response:**\\n\\n\\n- The area of theoretically-backed frameworks for AI regulation is exceptionally sparse; there are no previous frameworks, let alone assumptions or theory, to build on.\\n- Our work is the first to establish theoretical results and assumptions in this area.\\n- While general, **Assumption 2 is realistic as it does not make sense to be able to achieve more safety with less cost**. \\n\\nWhile we agree that the additions of assumptions act as limitations towards the realism of any theoretical approach, it is unreasonable to believe that the very first theory-backed solution in a research area will solve the entire problem with no assumptions utilized. Furthermore, we believe that Assumption 2 is reasonably realistic to start research in the domain of AI regulation, as it models the generic relationship between safety and cost.\\n\\n**Remark:** We would like to take this opportunity to emphasize the core contribution of our work. Our goal is to propose the first theory-backed AI regulatory framework that incentivizes safer model development and deployment. We believe that our paper takes a big stride towards implementable regulatory AI frameworks. With such a difficult and complex problem, it is nearly impossible to solve in its entirety in one shot. 
We hope that our paper will spur future research into this area and soon provide a robust solution for governments to implement.\"}", "{\"title\": \"Reviewer Ft8N Rebuttal\", \"comment\": \"Thank you, Reviewer Ft8N, for your insightful review of our paper. We appreciate that you found our work well-written, well-supported, and can help \\\"enhance the current AI regulatory work with a well-formulated framework and has a potential to have some significance in this domain\\\". Below, we address all questions you raised.\\n\\n## Weaknesses\\n\\n---\\n\\n> **Weakness 1:** While the paper is well-supported in the mathematical formulation and proofs, it perhaps could have provided more evidence on the experiments and empirical data.\\n\\n**Response to Weakness 1:**\\n- We have provided an additional ablation study within our Global Response that affirms the increasing relationship between safety and cost.\\n\\n\\n> **Weakness 2:** More description of how this framework can be applied in AI regulatory or in practice might help ground it further and make it relevant to a wider group of audiences.\\n\\n**Response to Weakness 2:**\\n\\n- The goal of this paper is to introduce a mathematically-based regulatory framework for incentivizing safer AI model deployment and detail (prove) its theoretical guarantees.\\n- We dive into certain practical applications within Appendix D, namely extending SIRA to repeated regulatory auctions (which is realistic in practice).\\n- We are working on a follow-up report that details how our framework can be applied in practice. \\n\\nWe believe, as the reviewer mentions, that our paper will be a launchpad to begin to \\\"explore and create safer and more robust AI regulatory\\\" frameworks. As a first step, we aimed to provide the theoretical backing of such a framework. In parallel, we are working on a policy-based report to implement an AI regulatory framework such as our own in practice. 
This report will focus more on the specific details surrounding implementation and less about the mathematical guarantees of SIRA.\\n\\n## Questions\\n\\n---\\n\\n> **Question 1:** What is the rationale of choosing the Beta and Uniform distribution (beyond what is described in line 323-324). Are there any related works that you could cite to support this choice of distributions?\\n\\n**Response to Question 1:**\\n- Uniform distributions are commonly utilized to analyze all-pay auctions (as detailed in Lines 299-301) [Amann 1996; Bhaskar 2018; Tardos 2017].\\n- We were interested in analyzing more than just a Uniform distribution (which is the usual choice for all-pay analysis), and the Beta distribution seems like a realistic choice in certain settings (as detailed in Lines 323-324).\\n\\n\\n> **Question 2:** What is the scaling of complexity and cost (such as evaluation and communication) as the number of the agents increase? Are there any risks of agents colluding to achieve a suboptimal safety level?\\n\\n**Response to Question 2:**\\n- Complexity and cost depends upon the size and bandwidth of the regulator.\\n\\nThere are examples of regulatory bodies that regulate a large number of products in a reasonable amount of time. For example, the FDA oversees approximately 2.1 Trillion dollars worth of food, tobacco, and medical products ([per its own numbers](https://www.fda.gov/media/168049/download)). Furthermore, the FDA has four approaches to speed-up the regulatory process for drug approval: Priority Review, Breakthrough Therapy, Accelerated Approval, and Fast Track. That being said, budget cuts and a lack of resources can limit the number of products reviewed, and increase the review process length.\\n\\n- SIRA scales linearly if there are enough resources. 
\\n\\nAs long as there are enough people and resources to review submitted models, each submitted model can be analyzed by one regulatory agent.\"}", "{\"title\": \"Discussion Response (Part 1)\", \"comment\": \"We apologize if our earlier response was unable to address your concerns. Thank you for getting back to us, and we take this opportunity to clarify them further.\\n\\n> **Weakness 1 on the rationality of the auction framework.** What the authors claim, i.e., \\\"there exists a minimum cost incurred by each model-building agent in order to have its model deployed,\\\" is exactly what I pointed out: \\\"Every model-building agent must incur this cost, regardless of whether it can successfully meet the regulatory requirements.\\\" This claim, in my point of view, weakens the rationality of the auction framework.\\n\\n**Response:**\\n\\n- The cost incurred by each agent does not only consider pre- or post-training of a model.\\n- Standard model training also affects safety performance (*e.g.,* a well-trained cancer-classifying model will achieve a better F1-Score than an untrained model).\\n\\nAgents that train a model inherently incur a cost towards improving safety, even if it is small.\\n\\n>To clarify my point, in an auction, one can choose not to bid for a certain item, and thus, there can be no cost for them. However, in the context of regulating LLMs, every model-building agent has to pay for the cost of pre-/post-training an LLM to improve the task performance and meet the underlying safety constraints once it starts to develop any LLM. Thus, there is an essential difference between auction and regulation.\\n\\n- In our proposed framework, agents are allowed not to participate, thereby not bidding, and will incur no cost as a result. 
Thus, our framework indeed aligns with that of an auction.\\n\\nLike an auction, agents that wish not to participate (and thus do not bid) will not incur any cost.\\n\\n> **Weakness 2 on Feasibility of Assumptions 1 and 2.** Actually, this weakness has also been mentioned by Reviewer F4Wy. This confirms my concern on the feasibility of Assumptions 1 and 2. Especially, the discussion on the \\\"cost\\\" definition cannot support Assumption 2 that there exists a strictly increasing function M that maps safety to cost. This assumption is too strong and not practical. Although determining a relationship between cost and each one of these factors falls out of the scope of the paper, the relationship between safety and cost cannot be oversimplified as a strictly increasing function. \\n\\n**Response:** \\n\\n- Providing the first theoretically-backed guarantees for AI safety regulation required the construction of new assumptions within the regulatory setting.\\n- While general, our assumption is realistic as **it does not make sense to be able to achieve more safety with less cost**.\\n \\nWe agree that the additions of assumptions act as limitations towards the realism of any theoretical approach. We argue, however, that the first papers in unexplored research areas often include assumptions in order to begin proposing theory-backed solutions towards solving the problem at hand. Furthermore, we do not believe that the assumption between cost and safety laid out in our paper is unrealistic. In general, spending more to train a model (including pre- and post-training) will result in greater safety. 
In an area with little to no literature, this assumption is a general yet realistic insight into the empirical relationship between cost and safety.\\n\\n- Our goal is to spur future research into the regulatory AI domain that will chip away at the strength of the assumptions utilized.\\n\\nAs detailed above, Assumption 2 generally models the relationship between cost and safety, and provides an avenue towards analysis. We believe that, as a first step towards tackling AI regulation, this is a reasonable assumption. Making our assumptions more realistic is a valid scope of future research.\\n\\n\\n> **Comment:** What's more, the safety score and the cost score themselves are not even easy to quantify as a scalar in practice since both safety constraints and cost involve many factors, as agreed by the authors.\\n\\n**Response:** \\n\\nWe believe there is a misunderstanding on these aspects, and we respectfully disagree with the reviewer on this point. For instance, within all of the LLM safety alignment literature, even OpenAI [Ouyang 2022, Christiano 2017], a scalar valued reward is used to ensure that a model is safety aligned [Kaufmann 2023]. While we agree that the cost involves many factors, it is reasonable to be able to estimate the monetary value of each cost (*e.g.,* the cost to collect data for RLHF or the cost for more compute time). As a result, cost can be reflected entirely in monetary value, which is a scalar value. \\n\\n1. Ouyang, Long, et al. \\\"Training language models to follow instructions with human feedback.\\\" Advances in neural information processing systems 35, 2022.\\n2. Christiano, Paul F., et al. \\\"Deep reinforcement learning from human preferences.\\\" Advances in neural information processing systems 30, 2017.\\n3. Kaufmann, Timo, et al. 
\\\"A survey of reinforcement learning from human feedback.\\\", 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the authors' detailed answers to my concerns and questions. Q2, Q3, and Q4 have been well addressed.\\n\\nFor Weakness 3, \\n\\n1) Although reserve thresholding (RT) is also a novel mechanism that the author proposes, the performance of SIRA at high thresholds compared with the baseline RT weakens the main contribution of SIRA, which occupies three pages of the main body.\\n2) Though SIRA outperforms RT when $\\\\epsilon<0.8$, setting $\\\\epsilon<0.8$ may not be meaningful since $\\\\epsilon<0.8$ may not be acceptable by the market, laws, or politics. In other words, there is an implicit constraint for the epsilon. To justify the strength of SIRA when $\\\\epsilon<0.8$, the author should justify the reasonable range of $epsilon$. It is better to involve current LLMs and current safety constraints to justify the range of $epsilon$.\\n\\nFor Question 1, as answered by the authors, it seems to be challenging to develop a standard mechanism to estimate model deployment value. This challenge makes SIRA impractical.\"}", "{\"comment\": \"Thank you for answering my questions.\"}", "{\"title\": \"Reviewer F4Wy Rebuttal (Part 1)\", \"comment\": \"Thank you, Reviewer F4Wy, for your insightful review of our paper. We appreciate that you found our line of work novel, theoretically sound, and well-written. Below, we address all questions you raised.\\n\\n## Weaknesses\\n\\n---\\n\\n> **Weakness 1:** (Minor) The authors assume a fixed safety threshold, denoted as $\\\\epsilon$, for model development. 
While this may hold in domains such as drug approvals or medical equipment (as illustrated by the authors' N95 mask example), applying a similar framework to AI models is more challenging and complex.\\n\\n**Response to Weakness 1:**\\n\\n- Various safety metrics exist that can already be applied to gauge AI model safety (*e.g.,* F1 Score, human-annotated error rate, win rate, or attack success rate for LLMs).\\n\\n\\nWhile complex and challenging, quantifying the safety of AI models is still feasible and necessary. Using current safety evaluation metrics is better than the alternative: zero safety regulation on deployed AI models. Furthermore, our framework is general enough such that when an improved method for evaluating AI model safety arises, it can immediately be used for the model evaluation process. \\n\\n\\n> **Weakness 2:** (Minor) The model assumes that the test set used by regulators is drawn from the same distribution as the agent\\u2019s evaluation data. However, in the specific context of language models, techniques such as fine-tuning and reinforcement learning from human feedback (RLHF) can easily improve performance metrics if the evaluation distribution remains consistent. This weakens the argument that a single scalar value would sufficiently capture the intricacies of regulatory inspection.\\n\\n**Response to Weakness 2:**\\n\\n- Techniques such as fine-tuning and RLHF add to agent costs (due to the need to collect more human/AI feedback and update the billions of parameters).\\n\\n\\nWe agree that fine-tuning and RLHF can improve performance metrics. However, performing fine-tuning and RLHF incurs added cost. This is exactly what we model within our paper: increased safety necessitates increased cost. We believe that there may be some confusion surrounding the safety-metric function $S$ and its application in model evaluation. The inputs of $S$ are the model parameters $w$ and evaluation data $x$. 
\\n\\n- In the example provided by the reviewer, fine-tuning or RLHF would result in a new set of parameters $w'$ such that $S(w';x) = \\\\epsilon' > S(w;x) = \\\\epsilon$.\\n- There is added cost for an agent to find $w'$ via fine-tuning or RLHF: $M(\\\\epsilon') = p_{\\\\epsilon'} > M(\\\\epsilon) = p_{\\\\epsilon}$.\\n\\n\\n> **Weakness 3:** The authors propose a strictly increasing relationship between safety and cost, arguing that \\\"safer models cost more to develop.\\\" However, they do not explicitly account for the trade-off between safety and the model's quality or usefulness in their framework. This omission raises questions, particularly since existing alignment approaches (e.g., RLHF) are often designed to balance helpfulness and harmlessness. In practice, a model could be made extremely safe (e.g., by providing only generic responses), but this could significantly reduce its usefulness without necessarily increasing development costs. In fact, under the authors' framework, one could submit a trivial model (e.g., one that always responds with \\\"Thank you for your input\\\"), bid the highest possible value, and meet the safety threshold to claim the regulator's compensation. This suggests that achieving safety in some cases may not necessarily be costly unless the model\\u2019s quality or usefulness is held constant.\\n\\n**Response to Weakness 3:**\\n\\n- Our regulatory framework holds for all AI models and not just LLMs.\\n- Our definition of safety is much more general than determining if the output of the model is \\\"harmless\\\" or not.\\n\\nThe definition of safety that we use takes into account the usefulness of a model's output. For example, one may want to evaluate the F1 score of a model in order to ensure that it is minimizing the number of false positive and negative predictions (especially since false negative predictions can be very dangerous in healthcare settings). 
For LLMs, one may want to evaluate the attack success rate against submitted models on a benchmark such as [JailbreakBench](https://jailbreakbench.github.io) and/or assess the factuality of responses via human evaluation. As a result, an LLM that always responds \\\"Thank you for your input\\\" may be harmless, but it will fail to provide accurate responses and would be flagged for providing incorrect and unfactual responses by human evaluation. The evaluation metrics used to quantify safety are domain-specific and must incorporate evaluation of model quality. An active line of future research we are pursuing is determining which evaluation metrics are most effective across a variety of domains.\"}", "{\"comment\": \"Thanks for your responses which clarified my questions. However, I concur with other reviewers' points about the practical aspects of Assumption 1 and 2, and the rationality of the auction framework. The paper still has a good potential to contribute to the domain. I will retain the recommendation towards accept but have reduced the score to Weak accept to reflect the concerns about the feasibility of the assumptions.\"}", "{\"summary\": \"The authors provide a formulation of the AI regulatory process as an all-pay auction, and design an auction-based regulatory mechanism that produces Nash Equilibria that induce safety considerations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A novel and important question, and strong motivation\", \"Sound theoretical analysis\", \"Generally well-written\"], \"weaknesses\": [\"The authors' formulation of the regulatory process and safety components appears to be somewhat simplified and may diverge from current AI developments in a few key ways:\", \"(Minor) The authors assume a fixed safety threshold, denoted as $\\\\epsilon$, for model development. 
While this may hold in domains such as drug approvals or medical equipment (as illustrated by the authors' N95 mask example), applying a similar framework to AI models is more challenging and complex.\", \"(Minor) The model assumes that the test set used by regulators is drawn from the same distribution as the agent\\u2019s evaluation data. However, in the specific context of language models, techniques such as fine-tuning and reinforcement learning from human feedback (RLHF) can easily improve performance metrics if the evaluation distribution remains consistent. This weakens the argument that a single scalar value would sufficiently capture the intricacies of regulatory inspection.\", \"The authors propose a strictly increasing relationship between safety and cost, arguing that \\\"safer models cost more to develop.\\\" However, they do not explicitly account for the trade-off between safety and the model's quality or usefulness in their framework. This omission raises questions, particularly since existing alignment approaches (e.g., RLHF) are often designed to balance helpfulness and harmlessness. In practice, a model could be made extremely safe (e.g., by providing only generic responses), but this could significantly reduce its usefulness without necessarily increasing development costs. In fact, under the authors' framework, one could submit a trivial model (e.g., one that always responds with \\\"Thank you for your input\\\"), bid the highest possible value, and meet the safety threshold $\\\\epsilon$ to claim the regulator's compensation. This suggests that achieving safety in some cases may not necessarily be costly unless the model\\u2019s quality or usefulness is held constant.\", \"This issue could be exacerbated by the presence of open-source models like LLaMA, which may further incentivize the \\\"gaming\\\" of the regulatory system. 
Agents could enter the competition with low-cost variants of open-source models that prioritize safety at the expense of quality, primarily to secure the regulator\\u2019s compensation. Put it in a different way, low-quality models (which are safe but not useful) could flood the regulatory system, making it easier to claim compensation without delivering valuable AI products. This could distort incentives, where participants optimize for regulatory approval rather than producing high-quality, well-rounded models.\", \"For the mechanism itself, a minor concern is the use of randomization, which introduces envy into the mechanism. With development costs potentially huge, this might lead to issues and discontent and distrust with the mechanism after the outcome is realized.\"], \"questions\": [\"Beyond the questions listed in the Weakness section, here are some additional questions I have:\", \"The framework assumes that the cost $M$ is the same across agents. This assumption seems unrealistic in practice, given that different agents may have varying models, training procedures, and resources, which makes the cost of aligning the safety levels different. If $M$ differs across agents, is there a way to adapt the framework to accommodate heterogeneous costs while maintaining its theoretical properties?\", \"The paper didn't mention incentive compatibility, a key issue in auction literature. Is truthful report of $b_i$ guaranteed?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Response (Part 3)\", \"comment\": \"Thank you for the continued discussion.\\n\\n> **Comment:** Regarding the rationality of the auction framework, the cost of developing VLMs for model-making agents is unavoidable, even if the agents temporarily choose not to submit their models. 
This is because agents must always submit their models to the regulator's black-box evaluation system to determine whether their VLMs meet the safety threshold.\\n\\n**Response:**\\n\\nThank you for this point. We still believe that there is some confusion surrounding the requirement of cost in our auction formulation. **As shown in our mathematical formulation in Equation 6, an agent who decides to either (a) not bid or (b) bid an unsafe model ($b_i = 0$) will incur zero cost and maintain a utility of zero.** \\n\\nThe regulatory paradigm that we envision, which mirrors current regulatory systems (*e.g.,* FDA, FAA, etc.), is one in which no models are placed onto the market unless they have been fully vetted and regulated. **Agents must pay the price to bring a safe model to market.** All model-building agents understand, prior to development, that they will have to clear the regulatory system before their models are available for public use. Thus, agents weigh whether it is in their best interest to participate and develop a model or not. We see this theoretically in Corollaries 1 & 2, namely Equations 10 and 13. If, given a total value $V_i$, an agent's utility is not positive, it will not build a model. **This is reflected in markets today. A person will not enter the market for making house paint if they know that the costs for producing paint with the minimal lead content requirement are too large for them to turn a profit**.\\n\\n> **Comment:** As for Assumption 2, I believe the relationship between safety and cost is oversimplified. For example, in adversarial machine learning, a better regulator can enhance the adversarial robustness of a neural network without increasing computing resources. \\n\\n**Response:**\\n\\n\\nIn the case of adversarial robustness, for example, there is a cost to determining what type of optimization method works best for your data (*e.g.,* Fast Gradient Sign Method or Projected Gradient Descent). 
In general, to improve the safety of a model, agents must incur extra cost to scope out many factors. These include the methods originally detailed by the reviewer: learning paradigm, model architecture, loss function design, and hyperparameter selection.\\n\\n\\n> **Comment:** Although the authors attempt to broaden the definition of \\\"cost,\\\" this expansion makes quantifying the \\\"cost\\\" more challenging and ultimately undermines the practicality of the paper.\\n\\n\\n**Response:**\\n\\nWe respectfully disagree that we are broadening the definition of \\\"cost\\\". As we state within our paper (Lines 165-167): *\\\"The assumption that a strictly increasing function M maps safety to cost is realistic, because achieving higher safety levels typically requires greater resources. Safer models often demand more data, advanced tuning, and extensive validation, all of which increase costs\\\".*\\n\\n\\n- We do not limit the scope of what cost is to only pre- or post-training.\\n- We have consistently detailed that \\\"costs\\\" towards improving model safety can arise from various avenues and investigations (shown from our quote above).\\n- The total cost of these investigations can be quantified as a monetary value (which is practical).\\n\\nThe area of theoretically-backed frameworks for AI regulation is exceptionally sparse. There are no previous frameworks, let alone assumptions or theory, to build on. **Our work is the first to establish theoretical results and assumptions in this area.** While we agree that the additions of assumptions act as limitations towards the realism of any theoretical approach, *it is unreasonable to believe that the very first theory-backed solution in a research area will solve the entire problem with no assumptions utilized*. Furthermore, we do not believe that Assumption 2 is unrealistic, as it models the generic relationship between safety and cost. 
\\n\\n**Remark:** We would like to take this opportunity to emphasize the core contribution of our work. Our goal is to propose the first theory-backed AI regulatory framework that incentivizes safer model development and deployment. We believe that our paper takes a big stride towards implementable regulatory AI frameworks. With such a difficult and complex problem, it is nearly impossible to solve in its entirety in one shot. We hope that our paper will spur future research into this area and soon provide a robust solution for governments to implement.\"}", "{\"title\": \"Global Response\", \"comment\": \"Thank you to all the reviewers for their reviewing service and paper feedback. We are happy to see that the reviewers agree that we are working on an important research problem, that our paper is well-written, and that our theory-based approach is novel and promising.\\n\\nBelow, we provide additional empirical results that affirm our stated relationship between safety and cost on real-world data.\\n\\n## Ablation Study\\n\\n---\\n\\n- We conduct an ablation study to demonstrate that in realistic settings, safety is mapped to cost in a monotonically increasing way (as detailed in Assumption 2). \\n \\nWhile there are many factors to consider when gauging safe AI deployment, we analyze model fairness, via equalized odds, for image classification in this study. Equalized odds measures if different groups have similar true positive rates and false positive rates (lower is better).\\n\\n- We train VGG-16 models on the Fairface dataset [K\\u00e4rkk\\u00e4inen 2019] for 50 epochs (repeated ten times with different random seeds), and consider a gender classification task with race as the sensitive attribute. \\n\\nModels with the largest validation classification accuracy during training are selected for testing. \\n\\n- Many types of costs exist for training safer models, such as extensive architecture and hyper-parameter search. 
In this study, we consider the cost of an agent acquiring more minority class data. \n\nThis leads to a larger and more balanced dataset. We simulate various mixtures of training data, starting from a 95:5 skew and scaling up to fully balanced training data with respect to the sensitive attribute. In our study, we gauge equalized odds performance on well-balanced test data for the models trained on various mixtures of data. Below we tabulate our results. \n\n| Minority Class % | Mean Equalized Odds Score | \n| -------- | -------- | \n| 5% | 22.55 | \n| 10% | 22.31 | \n| 15% | 18.97 | \n| 20% | 17.46 | \n| 25% | 15.78 | \n| 30% | 15.44 | \n| 35% | 13.09 | \n| 40% | 11.01 | \n| 45% | 9.83 | \n| 50% | 9.38 | \n\n- The equalized odds score decreases (the model becomes safer) when collecting more minority class data (increased cost). \n \nTo adjust equalized odds to fit into the setting where $\\epsilon \\in (0,1)$, one can invert and normalize the equalized odds score. We will upload a new version of our paper that includes this ablation study (with a scatter plot of the relationship between safety and cost shown in the table above). \n\n1. K\u00e4rkk\u00e4inen et al., FairFace: Face attribute dataset for balanced race, gender, and age, 2019.\n\n## Contribution\n\n---\n\nWe want to emphasize the importance of furthering research into AI regulatory frameworks. The deployment and usage of AI models is often unchecked. Lax regulation of AI deployment has led to, and may further accelerate in the future, the proliferation of misinformation and harmful effects on society. We believe that our paper takes a big step towards a valuable societal goal of establishing an effective and mathematically-backed regulatory framework that governing bodies can implement. In summary, our paper contributes a novel mechanism to the area of AI regulation that:\n\n1. 
Formulates the regulatory process realistically as an auction, where there is one regulating body and many model-building agents.\\n2. Leverages auction theory to derive equilibria such that rational agents are incentivized to both participate in the regulatory process and submit safer models.\\n3. Empirically improves model safety by over 20% and participation rates by 15% compared to baseline regulatory mechanisms.\"}", "{\"comment\": \"I really appreciate the authors for their prompt and thorough response. I believe that developing a theoretical framework for the AI regulation system is both important and promising. However, I still have concerns about the rationality of the auction system and the assumptions within the current framework. Addressing these issues would strengthen the paper.\\n\\nRegarding the rationality of the auction framework, the cost of developing VLMs for model-making agents is unavoidable, even if the agents temporarily choose not to submit their models. This is because agents must always submit their models to the regulator's black-box evaluation system to determine whether their VLMs meet the safety threshold.\\n\\nAs for Assumption 2, I believe the relationship between safety and cost is oversimplified. For example, in adversarial machine learning, a better regulator can enhance the adversarial robustness of a neural network without increasing computing resources. Although the authors attempt to broaden the definition of \\\"cost,\\\" this expansion makes quantifying the \\\"cost\\\" more challenging and ultimately undermines the practicality of the paper.\\n\\nAdditionally, I've noticed that some other reviewers share concerns about the assumptions and the trade-off between safety and usefulness. As a result, I have decided to increase my score to 5 with a middle confidence level of 3.\"}", "{\"title\": \"Discussion Response\", \"comment\": \"Dear Reviewer F4Wy,\\n\\nThank you for your response. 
Below, we address your comments.\n\n> **Comment:** A main point of the authors' response is that a scalar metric could encode both 'safety' of the model and the 'accuracy' of the model, which will be used in the thresholding. I think this appears a bit overly optimistic to me...\n\n**Response:**\n\n- Our framework does not apply to LLM regulation alone; there exist metrics that can effectively encode safety and accuracy of AI models outside the world of LLMs (*e.g.,* the F1-score for cancer-classifying AI models).\n\nWe agree with the reviewer that in the domain of LLMs, which is relatively new, there is currently no scalar metric that optimally gauges LLM safety. \n\n- For LLM regulation, human evaluation is the current best method. While costly, regulating bodies generally have sufficient budgets.\n\nAs touched on by the reviewer, one method to encode safety and accuracy would be to combine, via weighted average or sum, various important safety and performance metrics. In instances where this may not be effective, as pointed out by the reviewer, a larger reliance on human evaluation would be required. It would be the regulator's job to mitigate the amount of bias incurred as part of this process.\n\n- We agree that determining an optimal scalar metric for safe LLM deployment is an important line of future work that we look forward to pursuing.\n\n> **Comment:** I still see significant gaps between the current framework and actual implementation.\n\n**Response:**\n\n- We believe that our framework provides a big step towards an implementable, feasible, and mathematically-backed regulatory approach for AI safety. \n \nCurrently, there is a dearth of research in the area of regulatory frameworks for AI. Compared with the few papers in the area, as detailed in our related works section, our framework is **(1)** simple and efficient to implement and **(2)** more realistically formulates the regulatory setting. 
Furthermore, our framework provably incentivizes agents to develop and deploy *safer* models, which has not been accomplished in previous works. While we agree that certain improvements to our framework can be made, our paper, to the best of our knowledge, provides the most realistic and implementable framework towards AI regulation to date.\"}", "{\"title\": \"Reviewer Reply Deadline\", \"comment\": \"Dear Reviewer LVmP,\\n\\nWe sincerely appreciate your time and effort to review our work. With the deadline for discussion ending in less than 20 hours, we want to make sure that our responses have addressed all of your concerns. Any additional insight is very helpful for us. If all of your concerns have been addressed, we would request a reconsideration of your original score.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Reviewer LVmP Rebuttal (Weaknesses)\", \"comment\": \"Thank you, Reviewer LVmP, for your insightful review of our paper. We appreciate that you found our work important, interesting, and well-written. Below, we address all questions you raised.\\n\\n## Weaknesses\\n\\n---\\n\\n> **Weakness 1:** The way used by the paper to model the safety may not be realistic. It is assumed to be some safety level $s_i$ of a model $w_i$, which is expected to be less than $\\\\epsilon$. How is the safety measured for AI models using the metric mapping $S$ in practice? For common foundation models and LLMs, it might be hard to evaluate $S$ for $w_i$, especially given the size of $w_i$. 
What if a model provider takes advantage of the inaccuracy of the safety evaluation to benefit itself?\n\n**Response to Weakness 1:**\n\n- The function $S$ is simply any metric that a regulator uses to gauge the safety performance $s_i$ of a model, represented by its parameters $w_i$ (*e.g.,* analyzing attack success rate on [JailbreakBench](https://jailbreakbench.github.io) for LLMs).\n\nWe believe there may be some confusion regarding the function $S$ and its application in model evaluation. In SIRA, agents will send their models to the regulator, who will gauge their safety levels using $S$.\n\n- $S$ determines the safety level $s_i$ of a model $w_i$, but does not relate safety to cost.\n\nConfusion may have arisen over the relationship between safety levels $s_i$ and agent costs. Within our paper, we assume that safety level $s_i$ is related to cost, via function $M$, in an increasing manner (*i.e.,* a larger safety level comes with an increasingly large cost). Thus, agents that desire a larger safety level $s_i$, determined by $S$, will have to pay more to attain it.\n\n> **Weakness 2:** The proposed auction algorithm, together with the theoretical results and analysis, seems quite standard. How does it differ from the classic all-pay auction results (for instance, Amann et al. 1996) in the setting for AI models? 
It is worth highlighting the technical novelty and emphasize why the proposed method is needed for AI models, given that it is claimed in Line 398-399 that \\\"To the best of our knowledge there are no other comparable mechanisms for safety regulation in AI.\\\"\\n\\n**Response to Weakness 2:**\\n\\n- SIRA is specifically designed to mathematically formulate the AI regulation problem.\\n- SIRA incorporates a reserve price (minimum safety bid required to win the deployment reward).\\n- SIRA allocates multiple rewards to many ($n >> 2$) agents.\\n- The derived agent utility (Equation 6) and derived equilibria in SIRA are novel and different than previous auction literature.\\n\\nWe want to thank the reviewer for allowing us to clarify, and more clearly detail within our paper, the technical novelty of SIRA compared to other all-pay auction works. The setting of our paper versus previous all-pay auction literature is starkly different. In previous literature (Amann 1996 for instance), the equilibrium of a two-player asymmetric all-pay auction is determined. There is only one winner and one reward, and there is no floor that the players must bid over in order to win their reward. In contrast, SIRA is the first to formulate the AI regulatory process as an auction. Thus, SIRA must account for **(i)** many more agents, **(ii)** a required safety level for model deployment, and **(iii)** multiple rewards available to the participating agents. As a result, the agent utility function in SIRA is much different than those in previous all-pay auction literature. Therefore, our theoretical analysis in deriving an equilibrium given this new utility is novel. Finally, we prove that SIRA spurs increased bids compared to other baselines in this domain that we ourselves formulated (Reserve Thresholding, Section 4).\"}", "{\"metareview\": \"This paper looks at the problem of regulating AI models, specifically for safety. 
This is approached through a theoretical framework: an all-pay auction with companies and a regulator. The authors find Nash equilibria and have theoretical results.\n\nReviewers agree that this is a very important problem, and that the approach taken is novel and interesting. Reviewers also agree that the paper is well-written, which I agree with. I agree that these are all key strengths of the paper. \n\nUnfortunately, all reviewers agree on a key limitation: the restrictiveness of the assumptions, specifically Assumption 2. There is of course a fine line between realistic assumptions and those that allow for the kind of theory this paper does. The authors have defended their assumptions in rebuttal and with additional text in the paper (and an ablation study). Upon further discussion with reviewers, however, we all find that this is still a key limiting factor of this paper. I encourage the authors to take these concerns into account for a future version of the paper, actively acknowledging and tackling these issues even earlier in the paper (e.g., by giving specific examples of where Assumption 2 is both realistic *and not realistic*).\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed rebuttal and changes during the discussion period. Most of the reviewers' concerns were addressed, which is good. However, all reviewers were unconvinced by the authors' response/claims about the limitations of Assumption 2, a key assumption in the analysis. Reviewer Ft8N reduced their score (to 6), and other reviewers did not increase their score above 5, all due to this.\n\nReviewer dJpW also specifically has concerns about ignoring the cost of training VLMs, which I agree is important, but I think is of smaller concern.\"}", "{\"title\": \"Request for Continued Discussion\", \"comment\": \"Dear Reviewers,\n\nWe want to thank you again for your reviewing service. 
We believe that our rebuttals have answered the questions raised within your reviews. If so, confirmation that we have indeed answered your concerns would be appreciated. If not, we are happy to continue the discussion before the discussion phase ends in a few days.\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Discussion Response (Part 2)\", \"comment\": \"> **Comment:** Although reserve thresholding (RT) is also a novel mechanism that the author proposes, the performance of SIRA at high thresholds compared with the baseline RT weakens the main contribution of SIRA, which occupies three pages of the main body.\\n\\n**Response:** \\n\\n- We reiterate that SIRA is still provably better than reserve thresholding (RT) across the board.\\n- SIRA is only slightly better than RT at high thresholds in the experiments we run; **SIRA may greatly outperform RT at high thresholds in other scenarios**.\\n\\n**We believe that it is unfair to penalize our work for proposing two novel frameworks in a research area that currently has zero alternatives.** Furthermore, SIRA is theoretically guaranteed to outperform RT in every instance. This is valuable, and would result in SIRA being implemented over RT in all real-world settings. Finally, the close performance at high thresholds arises experimentally due to the distributions used for total value $V_i$. **In practice, this gap would be much larger if more desirable rewards are provided by the regulator**. We detailed this in our original rebuttal, in response to Q2.\\n\\n> **Comment:** Though SIRA outperforms RT when $\\\\epsilon < 0.8$, setting $\\\\epsilon < 0.8$ may not be meaningful since $\\\\epsilon < 0.8$ may not be acceptable by the market, laws, or politics. In other words, there is an implicit constraint for the $\\\\epsilon$. To justify the strength of SIRA when $\\\\epsilon < 0.8$, the author should justify the reasonable range of $\\\\epsilon$. 
It is better to involve current LLMs and current safety constraints to justify the range of $\\epsilon$.\n\n**Response:** \n\nWe believe that our response to the comment above addresses this comment. First, SIRA always outperforms RT, so there is never a reason to use RT over SIRA. Second, in many realistic scenarios, it may be the case that the total value is quite large for all agents and is not reflective of the distributions used within our experiments. In this case, SIRA would improve further at the larger thresholds. \n\nFinally, we still provide an example of settings where $\\epsilon < 0.8$ is applicable in law. When taking the Universal Bar Exam (UBE) to become certified to practice law, **the most stringent states require a score of 270 out of a possible 400, a 67.5% score** ($\\epsilon=0.675$). Many other examinations to ensure safe professional expertise (*e.g.*, Step 1/2 tests in Medicine, or Professional Engineering exams) require passage rates much lower than 80%. As a result, there are many instances where safe passage does not require large epsilon values.\n\n> **Comment:** For Question 1, as answered by the authors, it seems to be challenging to develop a standard mechanism to estimate model deployment value. This challenge makes SIRA impractical.\n\n**Response:** \n\n- SIRA does not need to estimate the model deployment value of any agent.\n\nWe believe there is some confusion surrounding model deployment value. Simply, model deployment value is an *agent-specific* value that each agent has internally (*e.g.,* revenue generation or market share percentage). For example, [OpenAI estimates that ChatGPT will bring in 2.7 billion dollars in revenue this year](https://www.nytimes.com/2024/09/27/technology/openai-chatgpt-investors-funding.html). Within the analysis of SIRA, akin to analysis of auctions, we provide theoretical results for any distribution of deployment value *across all agents* (Theorem 2). 
We then provide explicit equilibria for two given distributions (Corollaries 1 and 2). As an example, in the case of a Uniform distribution (Corollary 1), we provide an equilibrium in scenarios where it is equally likely that a random agent has an extremely large model deployment value as it does a small value.\"}", "{\"title\": \"Reviewer Reply Deadline\", \"comment\": \"Dear Reviewer dJpW,\\n\\nWe sincerely appreciate your time and effort to review our work. With the deadline for discussion ending in less than 20 hours, we want to make sure that our responses have addressed all of your concerns. Any additional insight is very helpful for us. If all of your concerns have been addressed, we would request a reconsideration of your current score.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Reviewer dJpW Rebuttal (Part 1)\", \"comment\": \"Thank you, Reviewer dJpW, for your insightful review of our paper. We appreciate that you found our work original, clear, and significant. Below, we address all questions you raised.\\n\\n## Weaknesses\\n\\n---\\n\\n> **Weakness 1:** Rationality of the auction framework. Considering the regulation process as an all-pay auction is not reasonable, at least in my opinion. Intuitively, safety-driven regulation establishes a minimum cost for the model-building agent. Every model-building agent must incur this cost, regardless of whether it can successfully meet the regulatory requirements. This represents an unavoidable exploration process within the model space. Even if we assume that all competitive agents know how to train their models to meet the safety threshold, accurately estimating the value of deployment remains a challenge. Thus, the framework may be overly simplistic in its approach to \\\"safety\\\" regulation.\\n\\n**Response to Weakness 1:**\\n\\n- We believe it is rational that there exists a minimum cost incurred by each model-building agent in order to have its model deployed. 
\\n\\nThis cost arises from placing effort into searching the model space for safe models. Simply put, **if a model is not safe enough to deploy, regardless of the cost incurred by the agent who built it, it should not be deployed.** As eloquently written in [Bengio et al. 2024]: \\\"Safety cases are politically viable even when people disagree on how advanced AI will become, since it is easier to demonstrate a system is safe when its capabilities are limited. Governments are not passive recipients of safety cases: they set risk thresholds, codify best practices, employ experts and thirdparty auditors to assess safety cases and conduct independent model evaluations, and hold developers liable if their safety claims are later falsified.\\\"\\n\\n- Our proposed regulatory framework provides a guide for a regulatory body to incentivize safe model development and deployment.\\n\\nIt is out of the scope of our work to detail how model-building agents incur the cost of safety training. In the case of LLMs, methods such as reinforcement learning from human feedback (RLHF) and fine-tuning allow agents to make their models safer. \\n\\n1. Yoshua Bengio et. al. Managing extreme AI risks amid rapid progress, 2024.\\n\\n> **Weakness 2:** Feasibility of Assumptions 1 and 2. Assumption 1 fails when a model-building agent maliciously injects backdoor triggers into the model by altering the training dataset. Assumption 2 is also not straightforward. More cost (e.g., computational resources) does not necessarily equate to better safety. 
Safety also depends on other factors, such as the learning paradigm, model architecture, loss function design, and hyperparameter selection.\n\n**Response to Weakness 2:**\n\n- The regulator can use various defenses [Goldblum 2022; Zhao 2024; Gao 2020] to mitigate a wide variety of attacks, including backdoor attacks.\n- Defending against malicious attacks falls outside the scope of our proposed framework.\n\n**Remark:** The goal of our paper is to provide the first mathematically-based regulatory framework to incentivize safer model deployment within the AI regulatory domain. Defending against maliciously-submitted models is an interesting and important future line of work.\n\n- There is a cost associated with exploring various design factors, including the learning paradigm, model architecture, loss function design, and hyperparameter selection.\n\nIn the examples provided by the reviewer, we agree that the learning paradigm, model architecture, loss function design, and hyperparameter selection do affect safety. However, there is a cost to investigate each of these provided examples. As a result, these all fall under \"cost\". Determining a relationship between cost and each one of these factors requires a detailed analysis that falls outside the scope of the paper.\n\n2. Goldblum, Micah, et al. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses, 2022.\n3. Zhao, Shuai, et al. A survey of backdoor attacks and defenses on large language models: Implications for security measures, 2024.\n4. Gao, Yansong, et al. Backdoor attacks and countermeasures on deep learning: A comprehensive review, 2020.\"}", "{\"comment\": \"Thanks to the authors for their response. However, the response to Weaknesses 1 & 2 cannot fully address my concern.\n\n**Weakness 1 on the rationality of the auction framework**. 
What the authors claim, i.e., \"there exists a minimum cost incurred by each model-building agent in order to have its model deployed,\" is exactly what I pointed out: \"Every model-building agent must incur this cost, regardless of whether it can successfully meet the regulatory requirements.\" This claim, from my point of view, weakens the rationality of the auction framework.\n\nTo clarify my point, in an auction, one can choose not to bid for a certain item, and thus, there can be no cost for them. However, in the context of regulating LLMs, every model-building agent has to pay for the cost of pre-/post-training an LLM to improve the task performance and meet the underlying safety constraints once it starts to develop any LLM. Thus, there is an essential difference between auction and regulation.\n\n**Weakness 2 on Feasibility of Assumptions 1 and 2**. Actually, this weakness has also been mentioned by Reviewers dJpW and F4Wy. This confirms my concern about the feasibility of Assumptions 1 and 2. In particular, the discussion on the \"cost\" definition cannot support Assumption 2 that there exists a strictly increasing function M that maps safety to cost. **This assumption is too strong and not practical**. Although determining a relationship between cost and each one of these factors falls outside the scope of the paper, **the relationship between safety and cost cannot be oversimplified as a strictly increasing function**. What's more, the safety score and the cost score themselves are not even easy to quantify as a scalar in practice since both safety constraints and cost involve many factors, as agreed by the authors.\"}", "{\"title\": \"Reviewer Reply Deadline\", \"comment\": \"Dear Reviewer F4Wy,\n\nWe sincerely appreciate your time and effort to review our work. With the deadline for discussion ending in less than 20 hours, we want to make sure that our responses have addressed all of your concerns. 
Any additional insight is very helpful for us. If all of your concerns have been addressed, we would request a reconsideration of your original score.\n\nBest,\n\nAuthors\"}", "{\"title\": \"Reviewer F4Wy Rebuttal (Part 2)\", \"comment\": \"> **Weakness 4:** This issue could be exacerbated by the presence of open-source models like LLaMA, which may further incentivize the \"gaming\" of the regulatory system. Agents could enter the competition with low-cost variants of open-source models that prioritize safety at the expense of quality, primarily to secure the regulator\u2019s compensation. Put differently, low-quality models (which are safe but not useful) could flood the regulatory system, making it easier to claim compensation without delivering valuable AI products. This could distort incentives, where participants optimize for regulatory approval rather than producing high-quality, well-rounded models.\n\n**Response to Weakness 4:**\n\nWe believe that our response to Weakness 3 clarifies our definition of safety and provides an answer to this Weakness.\n\n\n> **Weakness 5:** For the mechanism itself, a minor concern is the use of randomization, which introduces envy into the mechanism. With development costs potentially huge, this might lead to issues and discontent and distrust with the mechanism after the outcome is realized.\n\n**Response to Weakness 5:**\n\n- Performing the randomization process multiple times reduces the likelihood of unfair outcomes. \n\nIn practice, to avoid the possible unfair scenarios as detailed in the reviewer's question, we can repeat the randomization process $x$ times. For this to work, the regulator will store, for each agent, the number of times $n_i$ that it has the higher safety bid. 
Then, the regulator will award premium rewards to the agents whose ratio $n_i / x$ is in the top half of all agents.\n\n\n## Questions\n\n---\n\n> **Question 1:** The framework assumes that the cost $M$ is the same across agents. This assumption seems unrealistic in practice, given that different agents may have varying models, training procedures, and resources, which makes the cost of aligning the safety levels different. If $M$ differs across agents, is there a way to adapt the framework to accommodate heterogeneous costs while maintaining its theoretical properties?\n\n**Response to Question 1:**\n- The motivation behind Assumption 2 is to generalize the relationship between safety and cost within the domain of AI regulation. \n\nIn general, safer models cost more to develop. We agree that there may be some slight discrepancies between agents regarding this relationship in certain settings. \n\n- As the first paper proposing a mathematically-based regulatory framework to incentivize safer model deployment within the AI regulatory domain, we believe that cost function discrepancies are secondary to the primary concern of developing frameworks to tackle AI safety regulation. \n \nAllowing personalized $M_i$ functions is an active line of research we are conducting for follow-up work.\n\n> **Question 2:** The paper didn't mention incentive compatibility, a key issue in auction literature. Is truthful reporting of $b_i$ guaranteed?\n\n**Response to Question 2:**\n\n- Truthfulness of $b_i$ is not a major issue within our mechanism since it is verified by the regulator (auctioneer) itself. 
\\n- Each agent must provide the regulator access to its model in order to verify its safety level.\"}", "{\"title\": \"Reviewer LVmP Rebuttal (Questions)\", \"comment\": \"## Questions\\n\\n---\\n\\n> **Question 1:** What is the technical challenge in the considered auction problem for AI models, compared to classic auction problems?\\n\\n**Response to Question 1:**\\nWe answer this question in Weakness 2 above.\\n\\n> **Question 2:** Practical AI models are often very large. How can the safety of these model be evaluated? Given that the auction is done in a one shot setting, probably it is fine even if the model is large.\\n\\n**Response to Question 2:**\\nWe answer this question in Weakness 1 above.\\n\\n> **Question 3:** I am more concerned about the compensation $v_i^p$, which needs to be provided by a regulator to implement the proposed auction algorithm. Why is this practical for existing AI models? How large does the compensation need to be? According to bidding equilibrium in Theorem 2, $v_i^p$ needs to be large for safer models. How could this be made up to compensate what the commercial AI models could achieve?\\n\\n**Response to Question 3:**\\n- If a regulatory body and framework are established, all existing models would have to pass through them before continued use.\\n\\nAny existing models that do not meet the safety threshold would be barred from deployment (with the threat of governmental action). \\n\\n- Premium rewards provide incentive for agents that have existing models to make their model safer. \\n\\nFor example, if the premium reward is a tax credit coupled with fast-tracked model deployment, Google, for example, may try to bid such a safe model that it is cleared to be deployed faster than one of its rivals, say OpenAI. 
In this way, the premium rewards still provide incentive for the agents of existing models to train them to be even safer.\\n\\n- The size of the premium reward depends upon the monetary limits of the regulator.\\n\\nThe reviewer is correct that larger premium reward values $v_i^p$ correspond to the safer models submitted to the regulator (Theorem 2). As a result, the regulator should try to increase the value of its premium reward to be as large as possible. However, there is a limit to what regulators can offer agents. For example, regulators are not able to offer millions of dollars to each agent that builds a safe model. Thus, the value $v_i^p$ depends upon the monetary limits of the regulator.\"}", "{\"summary\": \"This paper presents a new AI regulatory framework known as the Safety-Incentivized Regulatory Auction (SIRA), designed as an all-pay auction. SIRA aims to motivate model-building agents to prioritize safety beyond a minimum threshold by formulating the AI regulatory process as an asymmetric all-pay auction with incomplete information. In this framework, agents submit their models to a regulator, and those that meet or exceed a specified safety level become eligible for deployment and may also receive additional rewards based on the randomized pair comparison result. The authors theoretically prove that under some assumptions and when all agents adopt a certain strategy, the system reaches a Nash Equilibrium. 
Empirical results indicate that when safety threshold prices are in the middle (0.2~0.8), SIRA enhances safety compliance and agent participation by 20% and 15%, respectively compared with the basic regulatory method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Originality.** The approach presents a unique use of all-pay auction mechanisms in AI regulation, where each agent's utility is linked to model safety levels (training cost), model value (market returns), and premium (policy compensation), creating an incentive for improved safety compliance.\\n\\n**Quality.** The paper theoretically derives Nash Equilibria to back the proposed incentive structure, demonstrating that agents' rational behavior leads them to exceed the minimum safety threshold. The experimental results align with the theoretical model.\\n\\n**Clarity.** This paper is well-written and easy to follow. The authors provide clear descriptions of the auction-based model and detailed steps in the algorithmic design of SIRA, supported by both theoretical and empirical validation.\\n\\n**Significance.** This paper tries to tackle an essential issue in AI regulation by encouraging safer model deployment.\", \"weaknesses\": \"**Rationality of the auction framework.** Considering the regulation process as an all-pay auction is not reasonable, at least in my opinion. Intuitively, safety-driven regulation establishes a minimum cost for the model-building agent. Every model-building agent must incur this cost, regardless of whether it can successfully meet the regulatory requirements. This represents an unavoidable exploration process within the model space. Even if we assume that all competitive agents know how to train their models to meet the safety threshold, accurately estimating the value of deployment remains a challenge. 
Thus, the framework may be overly simplistic in its approach to \\\"safety\\\" regulation.\\n\\n**Feasibility of Assumptions 1 and 2.** Assumption 1 fails when a model-building agent maliciously injects backdoor triggers into the model by altering the training dataset. Assumption 2 is also not straightforward. More cost (e.g., computational resources) does not necessarily equate to better safety. Safety also depends on other factors, such as the learning paradigm, model architecture, loss function design, and hyperparameter selection.\\n\\n**Performance at high thresholds.** As highlighted in the experiments, SIRA demonstrates limited advantages when safety thresholds approach the upper range (e.g., above 0.8), where its performance is similar to that of simpler reserve threshold models.\\n\\n1. Evan Hubinger, et al., Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arXiv:2401.05566.\", \"questions\": \"**Q1.** Is there a reasonable mechanism for estimating the market value ($v_i^d$) of a model before it is submitted to the regulator or even before the training phase begins?\\n\\n**Q2.** Considering that SIRA\\u2019s performance deteriorates at high safety thresholds, would a simple increase in the threshold serve as a better incentive in such cases, as it may more directly encourage safer model development?\\n\\n**Q3.** The authors mention that safety evaluations rely on IID assumptions for both agent and regulator data. How would the proposed mechanism adapt to non-IID settings, where the agent's training data might be maliciously poisoned, or where the regulator's evaluation data is collected through other means?\\n\\n**Q4.** Is the random comparison fair for all competitive agents? For example, if we have utility values such that $u_A > u_B > u_C > u_D$, and A and B are grouped together while C and D are grouped together, then B and D cannot receive the policy bonus. 
However, since $u_B > u_C$, this situation could be considered unfair to B.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel auction-based regulatory framework formulated as an asymmetric and incomplete all-pay auction. The mechanism is described mathematically and also shows good empirical results in enhancing safety and participation rates. The framework consists of a regulator and multiple participating agents. Overall, this is an interesting framework with good potential to enable safer and more robust AI regulation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and well-supported by both theoretical proofs and empirical results. It addresses the important area of AI regulation via a multi-agent economic, game-theoretic framework. There are a few assumptions to simplify the mechanism but they appear to be acceptable/realistic, such as i) the regulator and the participating agents use data from the same distribution to evaluate and submit the safety level, and ii) safer models cost more to develop. These assumptions perhaps need more clarification/grounding or adjustment to become more applicable and feasible in practice. A safer model can tend to cost more to develop, but cost and safety might not always be strictly increasing together. The paper helps advance current AI regulatory work with a well-formulated framework and has the potential to be significant in this domain.\", \"weaknesses\": \"While the paper is well-supported in the mathematical formulation and proofs, it perhaps could have provided more evidence on the experiments and empirical data. 
More description of how this framework can be applied to AI regulation in practice might help ground it further and make it relevant to a wider audience.\", \"questions\": [\"What is the rationale for choosing the Beta and Uniform distributions (beyond what is described in lines 323-324)? Are there any related works that you could cite to support this choice of distributions?\", \"What is the scaling of complexity and cost (such as evaluation and communication) as the number of agents increases? Are there any risks of agents colluding to achieve a suboptimal safety level?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer dJpW Rebuttal (Part 2)\", \"comment\": \"> **Weakness 3:** Performance at high thresholds. As highlighted in the experiments, SIRA demonstrates limited advantages when safety thresholds approach the upper range (e.g., above 0.8), where its performance is similar to that of simpler reserve threshold models.\\n\\n**Response to Weakness 3:**\\n\\n- Reserve Thresholding (RT) is also a novel mechanism that we propose within our paper.\\n- SIRA provably outperforms RT for bidding size and participation rate across *all* $\\\\epsilon$ ranges.\\n- SIRA is a more realistic and robust framework, as it can be used across various settings where the $\\\\epsilon$ threshold can be vastly different.\\n\\n While SIRA empirically demonstrates limited advantages at the upper range of safety thresholds versus RT, it still provably improves the bidding size compared to RT (albeit by a small margin). 
Below, we compare the participation rate and bidding size between SIRA and RT (in the Uniform setting).\\n\\n| $\\\\epsilon$ Range | SIRA Participation | SIRA Bid | RT Participation | RT Bid |\\n| -------- | ------ | -------- | -------- | -------- |\\n| (0, 0.2) | **86.273%** | **0.145** | 85.352% | 0.105 |\\n| (0.2, 0.4) | **61.788%** | **0.358** | 57.640% | 0.305 |\\n| (0.4, 0.6) | **38.414%** | **0.567** | 30.402% | 0.505 |\\n| (0.6, 0.8) | **15.514%** | **0.753** | 10.297% | 0.705 |\\n| (0.8, 1.0) | **1.633%** | **0.903** | 1.424% | 0.900 |\\n\\nAs expected from the results of our theory (Section 5), SIRA outperforms RT in participation rate and bid size across all $\\\\epsilon$ ranges. \\n\\n## Questions\\n\\n---\\n\\n> **Question 1:** Is there a reasonable mechanism for estimating the market value ($v_i^d$) of a model before it is submitted to the regulator or even before the training phase begins?\\n\\n**Response to Question 1:**\\n- In the auction literature [Amann 1996; Bhaskar 2018; Tardos 2017], agent valuations are private and arise from nature; no mechanism estimates model deployment value. \\n- It is realistic for many companies to place a market value on their own intellectual property and products.\\n\\nIn practice, these valuations are determined in-house. For example, Google may perform market research to determine the value (revenue generation) of a model, like Gemini, before it is released. 
\\n\\n> **Question 2:** Considering that SIRA\\u2019s performance deteriorates at high safety thresholds, would a simple increase in the threshold serve as a better incentive in such cases, as it may more directly encourage safer model development?\\n\\n**Response to Question 2:**\\n\\n- Increasing the threshold only discourages agents with lower total value $V_i$ from participating.\\n\\nAs one can see in Figures 2 & 3, raising the threshold results in lower participation (while the bids increase in size).\\n\\n- A straightforward method to incentivize safer model deployment would be to increase the premium reward.\\n\\nIncreasing the premium reward would shift the probability mass of total value $V_i$ towards 1 for agents. Consequently, more agents would have values closer to 1, which results in more agents willing to train a model that is able to clear the higher safety threshold.\\n\\n> **Question 3:** The authors mention that safety evaluations rely on IID assumptions for both agent and regulator data. How would the proposed mechanism adapt to non-IID settings, where the agent's training data might be maliciously poisoned, or where the regulator's evaluation data is collected through other means?\\n\\n**Response to Question 3:**\\n- As detailed in our Future Work (Section 7), one possible solution is the requirement that data must be shared (in a private and anonymous manner) between each agent and the regulator. \\n- Another possible solution would be the regulator collecting more data on its own (with possible assistance from agents). \\n- The regulator can employ various defenses to mitigate malicious attacks (see response to Weakness 2).\\n\\n> **Question 4:** Is the random comparison fair for all competitive agents? For example, if we have utility values such that $u_A > u_B > u_C > u_D$, and A and B are grouped together while C and D are grouped together, then B and D cannot receive the policy bonus. 
However, since $u_B > u_C$, this situation could be considered unfair to B.\\n\\n**Response to Question 4:**\\n\\n- Performing the randomization process multiple times reduces the likelihood of unfair outcomes. \\n\\nIn practice, to avoid the possible unfair scenarios as detailed in the reviewer's question, we can repeat the randomization process $x$ times. For this to work, the regulator will store the number of times each agent has the higher safety bid $n_i$. Then, the regulator will award premium rewards to agents having value $n_i / x$ in the top half of all agents.\"}" ] }
06B23UkNid
MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model
[ "Sumin Ha", "Jun Hyeong Kim", "Yinhua Piao", "Sun Kim" ]
Large language models (LLMs) have shown significant potential in the biomolecular domain, particularly by demonstrating that effective adaptation of molecular representations for LLMs can greatly improve the quality of molecular captions. Most previous works have focused on aligning unimodal molecular structures with text, overlooking the diversity of modalities. Naive approaches to aligning multi-modal molecular structures with text often lead to (1) separately aligned embeddings, (2) inconsistent textual representations, and (3) increased computational overhead. To address these challenges, we propose the LLM framework MV-CLAM equipped with MQ-Former, a novel multi-querying transformer. This architecture introduces a cross-modal projector facilitating the simultaneous alignment of 2D and 3D molecular representations to a unified text token. By employing a shared self-attention layer, MQ-Former preserves rich molecular embeddings across different dimensions while consolidating them into a universal molecular token. Our approach outperforms baseline models in both molecule-text retrieval and molecule captioning tasks. Additionally, our framework shows promising results for zero-shot molecule editing and molecule-related question answering. By effectively integrating multi-view molecular data into a format conducive to LLMs, our method serves as a valuable tool for enhancing the characterization and understanding of chemical structures, facilitating a more seamless transition from molecular data to textual descriptions. The source code of MV-CLAM is available at https://anonymous.4open.science/r/mv-clam-4827.
[ "Molecule captioning", "large language models", "drug discovery", "molecule representation learning" ]
Reject
https://openreview.net/pdf?id=06B23UkNid
https://openreview.net/forum?id=06B23UkNid
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xFH1b5GKel", "urMoE6cGDv", "roevA7DeCk", "pGxfq6ytBW", "nrpwkVrbYm", "kg6Y1EUO5V", "iv6L9bA1FX", "hefprGvv7n", "afGkng9KXw", "SZor4X9PgZ", "SPDtsIgtdN", "R9lU7kLwna", "ONgTxFBfKJ", "NTmY2mznW5", "NRONpPvMs4", "MECE7Z1lpg", "Hjm6UxfqRI", "HAIz4wygLn", "GaC6FNilW1", "Fjr77hjI7B", "CyIKONxnIj", "9eASKgi5al", "8bfabZVPDz", "8N0rcKyXhe", "3W3HYZZb6Q" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1737524250776, 1732440019523, 1732440207618, 1732441896730, 1732782345501, 1732443874232, 1732701955449, 1730284339594, 1732441887245, 1732443921924, 1730282154723, 1732782979273, 1732441726109, 1734185977860, 1732444572501, 1730656252504, 1732442788956, 1732441330876, 1732439441090, 1733082537675, 1732439534060, 1732444145057, 1730557631368, 1732442493822, 1732782541727 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_6JAy" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_48Rx" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_R1Xc" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Area_Chair_ykyg" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_6JAy" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_48Rx" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Reviewer_vfcG" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ], [ "ICLR.cc/2025/Conference/Submission13305/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"General Response (2). Concerning the CheBI20 dataset\", \"comment\": \"We appreciate the reviewer\\u2019s comments and acknowledge the importance of using diverse and extensive datasets for evaluating the generalizability of our model. In this work, we chose to validate our MV-CLAM framework for molecule captioning solely on the PubChem324k dataset, **following the baseline approach (3D-MoLM) we built upon**, which also reports captioning performances on the PubChem324k dataset.\", \"our_decision_to_exclude_the_chebi_20_dataset_was_primarily_motivated_by_the_following_considerations\": \"1. Data Redundancy and Leakage Concerns\\n> ChEBI-20 is derived from PubChem324k dataset, with additional manual curation for specific biological contexts. Since ChEBI-20 is essentially a subset of PubChem, there is an inherent overlap between the two datasets. This overlap raises potential concerns about data redundancy and leakage when training and evaluating on these datasets together.\\n\\n2. 
Evaluation of Molecular Nomenclature\\n> Unlike the PubChem324k dataset, which retains molecular names and provides a broader variety of molecular structures, ChEBI-20 replaces molecular names with generic placeholders such as \\u201cthe molecule.\\u201d While this emphasizes molecular properties, it limits the evaluation of the model\\u2019s ability to connect structural features with accurate molecular nomenclature. Names often encode critical structural information (e.g., functional groups, stereochemistry, or ring systems), making them an essential aspect of evaluating a model\\u2019s understanding of molecular structures.\\n\\nTaking these considerations into account, we believe that using the **PubChem dataset provides a rigorous and comprehensive evaluation of our framework\\u2019s capabilities in text retrieval, molecule captioning, and downstream tasks**. We have included a detailed discussion in **Appendix A.2** on our dataset selection process and the rationale for excluding ChEBI20. We emphasize that *validating our captioning performance on the PubChem dataset alone is sufficient, as it offers a significantly larger and more diverse set of molecular descriptions, enabling a robust assessment of the generalizability and efficacy of our model*. We deeply appreciate the reviewers for thoughtful suggestions. \\n\\n\\nReferences\\n> [1] Li et al., \\\"Towards 3D Molecule-Text Interpretation in Language Models\\\". [2] Zhang et al., \\\"UniMoT: Unified Molecule-Text Language Model with Discrete Token Representation\\\" [3] Liu et al., \\\"Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing\\\" [4] Liu et al., \\\"MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter\\\". [5] Cao et al., \\\"InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery\\\". 
[6] Xiao et al., \\\"MolBind: Multimodal Alignment of Language, Molecules, and Proteins\\\" [7] Christofidellis et al., \\\"Unifying Molecular and Textual Representations via Multi-task Language Modelling\\\" [8] Liu et al., \\\"GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text\\\" [9] Zhang et al., \\\"Atomas: Hierarchical Alignment on Molecule-Text for Unified Molecule Understanding and Generation\\\" [10] Cao et al., \\\"PRESTO: Progressive Pretraining Enhances Synthetic Chemistry Outcomes\\\" [11] Ganeeva et al., \\\"Lost in Translation: Chemical Language Models and the Misunderstanding of Molecule Structures\\\" [12] Phan et al., \\\"SciFive: a text-to-text transformer model for biomedical literature\\\" [13] Livne et al., \\\"nach0: Multimodal Natural and Chemical Languages Foundation Model\\\"\"}", "{\"title\": \"General Response (3). Concerning novelty & technical contributions\", \"comment\": \"We sincerely thank the reviewers for their valuable feedback regarding the novelty and technical contributions of MQ-Former, and we apologize for not clearly conveying these aspects in our initial submission. MQ-Former *aligns molecular and text spaces by sharing self-attention layers between each multi-view structural information (2D and 3D) and text*, generating a universal query token interpretable to LLMs. The **simultaneous yet separate alignment of 2D and 3D structural representation to text**, is only applicable with our novel MQ-Former, and ensures a balanced incorporation of abundant structural information among the differing views. This design addresses key challenges in multi-modal learning to *align multiple modalities with minimal information loss*. 
We have demonstrated this through our case study of captions, attention visualization, and the ablation study using single-view molecular embeddings mentioned in Section 6.3.\\n\\nTo further address your comments and demonstrate the uniqueness of our approach, we have conducted two additional studies: 1) Embedding space visualization to demonstrate that MQ-Former preserves modality-specific information in accordance with textual semantics. 2) A comparison ablation that utilizes multi-view molecular embeddings (2D+3D embeddings concatenated) in the former Q-Former framework. This setting necessitates employing MQ-Former with an additional branch, differentiating it from Q-Former. \\n\\n1. **Embedding Space Visualization**\\n\\n> We analyzed the embeddings of the 2D, 3D queries, and our universal query tokens in the latent space alongside the corresponding word embeddings from textual descriptions. Specifically, we focused on highly 2D- or 3D-related words from the textual captions described in Appendix Figure 6-7 (Case Study 1. Attention visualization). \\nFor 2D-related words, the distance between the embeddings followed the trend: 2D < 2D+3D < 3D. Conversely, for 3D-related words, the trend was: 3D < 2D+3D < 2D. These observations confirm that MQ-Former successfully preserves modality-specific information while aligning it with textual semantics, highlighting the interplay between 2D and 3D molecular views in multi-modal learning. We have included the visualization results in **Appendix Figure 7**.\\n\\n2. **Comparison with Multi-View Representations in Q-Former Framework**\\n\\n> To highlight the necessity of MQ-Former, we conducted an ablation study comparing our architecture with a variant that aligns multi-view molecular representations using a single Q-Former module. MolMix [14] (a multimodal representation learning framework), suggested by reviewer 6JAy, lacks pretrained weights for molecular embeddings. 
Hence, we leveraged the 2D embeddings from MAT and the 3D embeddings from Uni-Mol, concatenating these representations before projecting them into the textual space using the Q-Former. We combined the results with the former ablation conducted in Table 3. \\n\\n> Overall, the study emphasizes that while single-view embeddings (e.g., 2D or 3D alone) capture important molecular information, they lack the comprehensive representation needed for captioning tasks requiring multi-faceted insights. Moreover, unlike the concatenation-based approach, MQ-Former preserves the rich, distinct representations of molecular views. That is, *the simultaneous alignment of these embeddings in the shared textual space enhances the preservation of intricate molecular properties*. This design facilitates more fine-grained alignment with text, maintaining diversified information, which results in higher-quality captions across all evaluated metrics (Table R4). Overall, MQ-Former enables the preservation of detailed and diverse molecular representations, facilitating precise alignment with textual descriptions and delivering superior performance across the captioning task.\\n\\n**Table R4**. Comparison of molecule captioning performance: multi-view embeddings aligned with Q-Former\\n| | BLEU2 | BLEU4 | METEOR | ROUGE1 | ROUGE2 | ROUGE-L |\\n|----------------------------|--------|--------|--------|--------|--------|--------|\\n| **2D + Q-Former** | 29.72 | 22.26 | 34.22 | 38.22 | 23.45 | 31.61 |\\n| **3D + Q-Former** | 29.45 | 22.03 | 33.79 | 37.86 | 23.11 | 31.83 |\\n| **Multi-view (2D&3D) + Q-Former** | 29.80 | 22.70 | 35.49 | 39.07 | 24.92 | 33.09 |\\n| **MV-CLAM** | **31.75** | **24.48** | **36.54** | **40.43** | **25.72** | **33.79** |\\n\\nThis novelty ensures more precise molecule-text understanding, distinguishing *MQ-Former as a robust and effective solution for molecular captioning*. 
We appreciate the opportunity to clarify our contributions and have incorporated these findings into the revised manuscript organized in **Appendix A.4. Effectiveness of MQ-Former**.\\n\\n\\nReference \\n> [14] Manolache, Andrei, Dragos Tantaru, and Mathias Niepert. \\\"MolMix: A Simple Yet Effective Baseline for Multimodal Molecular Representation Learning.\\\"\"}", "{\"title\": \"Response to Reviewer 6JAy's comments - 3\", \"comment\": \"**Question on 3D conformer construction**\\n1. Is it ETKDG geometry generation with further MMFF optimization?\\n\\n> Yes, the 3D conformers in our study were generated using RDKit\\u2019s ETKDG (Extended-Torsion Distance Geometry with additional restraints) method, which incorporates stereochemical rules and experimental torsion-angle preferences for more realistic initial geometries. After embedding the conformer with ETKDG, further refinement was performed using the MMFF force field to optimize bond lengths, angles, and torsional strains, resulting in lower-energy conformations. This process is widely used in prior studies involving 3D molecular modeling and prediction tasks, as it produces chemically meaningful and energetically favorable conformers. Thank you for the opportunity to clarify the 3D conformer construction process. We have now briefly described the process in the PubChem324K section (5.1 Datasets).\\n\\n2. Since it is possible to generate several different conformers for a single molecular structure, did you assess the dependence of the model quality on the conformations?\\n\\n> We acknowledge that a single molecular structure can have multiple valid conformers, each representing a different local energy minimum. In this study, we used one representative conformer per molecule, specifically the lowest-energy conformer obtained after MMFF optimization. 
While we did not explicitly assess the dependence of model quality on different conformers, this single-conformer approach is consistent with many previous works regarding molecule captioning. Ensuring the use of a chemically plausible, low-energy conformer minimizes variability across datasets. Evaluating the impact of multiple conformers or conformer ensembles on model performance could be a valuable direction for future research.\\n\\n3. Is it necessary to optimize a conformer generated with ETKDG using MMFF?\\n\\n> Although ETKDG generates high-quality 3D conformers with chemically meaningful geometries, additional MMFF optimization is a common practice in prior studies to further refine conformers by minimizing steric clashes and optimizing geometries in terms of potential energy. This step ensures that the conformers better approximate their true physical structures, which can improve the downstream prediction of molecular properties. Therefore, MMFF optimization was considered an essential step in our workflow to align with established best practices and enhance model reliability.\"}", "{\"title\": \"Response to Reviewer 6JAy's comments 2 - (1/3)\", \"comment\": \"We appreciate your thorough review of our rebuttal and the additional insights you have provided. Your feedback has given us an opportunity to further clarify our approach, and we hope our explanations address any remaining concerns effectively.\\n\\n> 1. Why do the metrics from the UniMolT paper for MolCA models differ so much from the original paper? I suppose relying on an unpublished paper is not a good practice.\\n\\nAs shown in **Table R1**, the performance discrepancies for MolCA compared to the original paper are **entirely expected due to differences in *dataset composition***. 
Naturally, baseline selections vary across all models because each uses distinct datasets, and the results presented in our paper are based on consistent dataset configurations **following 3D-MoLM**.\\nWhile using PubChem as the source, MolCA's preprocessing pipeline results in a *smaller dataset*. Compared to the PubChem dataset used by 3D-MoLM, UniMoT, and our model, MolCA's version has approximately 3k fewer molecules in the pretraining subset alone. This distinction likely explains why 3D-MoLM, published after MolCA's release, does not include MolCA as a baseline.\\n\\nTo ensure a fair and comprehensive comparison, we reproduced the results of MolCA under the same dataset setting and obtained similar results to those reported by UniMoT. Notably, UniMoT, which is currently under review for ICLR 2025, has provided the most reliable reproduction of MolCA's performance under comparable conditions, and there have been no questions raised about the reliability of its reproduced results in the ongoing review discussions.\"}", "{\"title\": \"Response to Reviewer R1Xc's comments - 2\", \"comment\": \"**2. Molecule generation: LLaMA2-based models:** Thank you for the thoughtful feedback. We chose LLaMA2 as the base large language model due to its strong performance in molecule-text modeling, as demonstrated in related works like 3D-MoLM and UniMoT. LLaMA2\\u2019s ability to leverage a large academic corpus aligns with our goal of developing a refined chemical foundation model. While MQ-Former is model-agnostic and can work with other architectures (e.g., T5-based models), we prioritized consistency with prior studies for fair comparisons, emphasizing the performance of our novel cross-modal projector.\\n\\nHowever, an important limitation of using LLaMA2 is that its tokenizer is not optimized for generating chemical SMILES representations. 
This limitation arises because LLaMA tokenizers are pretrained on general-purpose and academic text corpora, which do not include specialized tokenization for chemical structures like SMILES. As a result, our primary downstream task focuses on molecule captioning and molecule-text alignment rather than molecule generation. Previous models based on LLaMA (e.g., 3D-MoLM) also do not focus on SMILES generation without additional tokenizer modifications or pretraining specific to chemical domains.\\n\\nWhile the LLaMA2 tokenizer is not explicitly trained to generate chemical SMILES representations, we sought to demonstrate the flexibility and potential of our MQ-Former architecture by exploring **zero-shot molecule editing** as an additional capability. In our experiments, we showcased instances where the model successfully edited molecule SMILES in a zero-shot setting, highlighting the generalization ability of MV-CLAM and its capacity to handle SMILES generation through the raw LLaMA2 tokenizer without further specialization. This approach represents, to the best of our knowledge, the first attempt to generate and edit molecular SMILES using the unmodified LLaMA tokenizer, directly bridging the gap between chemical structure representations and pretrained large language models. We have revised Section 6.5 and Appendix A.3 in the manuscript to incorporate these insights.\"}", "{\"comment\": \"Thank you for clarifying some of my concerns.\\nStill, the following remains unclear:\\n1. Why do the metrics from the UniMolT paper for MolCA models differ so much from the original paper? I suppose relying on an unpublished paper is not a good practice. \\n2. The same questions for 3D-MoLM models in the Q&A task. The metrics in the original paper are significantly better\\n3. The setup of zero-shot editing stays unclear. 
What are the metrics of this task?\"}", "{\"summary\": \"The paper introduces MV-CLAM, a framework utilizing a novel multi-querying transformer (MQ-Former) to enhance the alignment of multi-modal molecular representations with text. By employing a shared self-attention layer, this approach effectively consolidates 2D and 3D molecular data into query tokens, improving performance in molecule-text retrieval and captioning tasks. Additionally, it demonstrates potential for zero-shot molecule editing and molecule-related question answering, thereby facilitating better characterization of chemical structures.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The description of the proposed methodology is easy to follow. The paper is well written in general.\", \"The paper introduces a promising multi-view for approach for the infusion of specialized chemical knowledge into general-purpose pre-trained LLMs.\", \"The proposed MV-CLAM achieves state-of-the-art on PubChem324K for molecule captioning and retrieval tasks.\"], \"weaknesses\": [\"The experimental evaluation of the proposed method is conducted on a single dataset for both task: molecule captioning and molecule-text retrieval.\", \"The list of baseline models on molecule captioning only includes a single T5 language model while there are more recent works, including: nach0 and Text+ChemT5.\", \"Some implementation decisions are not justified well enough. This includes: (i) the choice of SciBERT as a language encoder for MQ-Former; (ii) the choice of 2D and 3D encoders; (iii) introduction of $K$ query tokens instead of a single query token for each view; (iv) the choice of LLaMA2 as an LLM. It is unclear how the experimental results would change if each of the mentioned models is replaced with another one.\", \"Incomplete ablation study. The necessity of (i) Molecule-text Contrasting and (ii) Molecule-text Matching losses is not proven experimentally. 
For (i), it is unclear whether two loss components are required or whether the model will perform well with a single one. For (ii), the impact of negative samples is under-explored.\", \"The effect of most hyper-parameters in the method's module on the resulting performance is understudied. For instance, query token count, negative sample count in MTM loss.\", \"The methodology for molecule-text retrieval is unclear from the paper.\", \"The applicability of the proposed methodology to a broader list of datasets is questionable: it requires 2D/3D molecular data in addition to simple SMILES string representations.\"], \"questions\": [\"Add experimental comparison against more chemical language models on molecule captioning, e.g., nach0 [1], Text+Chem T5 [2], SciFive [3], PRESTO [4], GitMol [5].\", \"For the retrieval task (Table 1), is it possible to add chemical BERT-based encoders in addition to the textual encoder SciBERT? (e.g., ChemBERTa)\", \"Conduct additional experiments on other molecule captioning datasets such as Mol-Instructions [6] and CheBI20 [7].\", \"For molecule-text retrieval, do you adopt a generative approach (e.g., GENRE [8]) or is the task formulated as a cross-modal embedding-based search by similarity (e.g., as in [9])?\", \"In Figure 3, where does the textual description come from during prediction on a test set? As far as I understand the molecule captioning task, you are only given a SMILES string.\", \"What is the LLaMA version you use? Add adopted HuggingFace checkpoints.\", \"Even if you adopt a LLaMA with 7B parameters, MolT5 has less than 1B. Could we not just scale MolT5 to 3-5B parameters and obtain a better molecule captioning quality?\", \"Why is MolT5 absent from Table 1?\", \"Add ablation study for SciBERT, 2D/3D molecule encoders, LLaMA2.\", \"Add ablation study for training losses. For the Molecule-text Contrasting loss, prove it requires two components. 
For Molecule-text Matching loss, explore the effect of negative samples.\", \"Is it possible to generalize the methodology to unseen datasets and unseen SMILES? Given a SMILES, can I always obtain its 2D/3D representation and apply a pre-trained MV-CLAM model?\"], \"typos\": [\"Line 102: transformer -> Transformer, Add reference.\", \"Line 194: **$A$** under-specified.\", \"Line 234: Missing citation for LoRA.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6JAy's comments - 2\", \"comment\": \"**QA results unavailable in main text**: Thank you for your valuable comment. We have evaluated the QA experiment results by comparing our framework, MV-CLAM\\u2014integrating the Multi-querying Transformer module with both 3D and 2D molecular encoders\\u2014against frameworks that use a single molecular encoder. Specifically, we compared it to:\\n\\n- 3D-MoLM: Q-Former with a 3D molecular encoder\\n- 2D-MoLM: Q-Former with a 2D molecular encoder\\n\\nFor each case, including our framework, we reproduced QA results for both non-3D properties (Molecular Weight, LogP, Complexity, and Topological Polar Surface Area) and 3D properties (HOMO, LUMO, HOMO-LUMO gap, and SCF energy), as shown in Tables R5 and R6.\\u00a0\\n\\n**Table R5.** Comparison of QA results for non-3D molecular property prediction. The values in parentheses indicate the validity score. 
\\n| Model | Molecular Weight | LogP | Complexity | Topological Polar Surface Area |\\n|-------------------|-----------------------|------------|------------------|-------------------------------------|\\n| 2D-MoLM | 47.51 (0.98) | 0.89 (0.99)| 110.78 (0.99) | 16.65 (0.99) |\\n| 3D-MoLM | 42.76 (0.96) | 1.25 (0.96)| 105.03 (0.96) | 20.97 (0.92) |\\n| MQ-Former (Ours) | **21.35 (0.92)** | **0.69 (0.94)** | **55.14 (0.91)** | **9.65 (0.91)** |\\n\\n\\n**Table R6.** Comparison of QA results for 3D molecular property prediction. The values in parentheses indicate the validity score.\\n| Model | HOMO | LUMO | HOMO-LUMO | SCF Energy |\\n|-------------------|------------|------------|----------------|-----------------|\\n| 2D-MoLM | 0.78 (0.99)| 0.47 (0.99)| 0.39 (0.90) | 0.98 (1.00) |\\n| 3D-MoLM | 0.42 (0.99)| 0.44 (0.98)| 1.26 (0.99) | 1.22 (0.98) |\\n| MQ-Former (Ours) | **0.35 (0.98)** | **0.42 (0.93)** | **0.35 (0.99)** | **0.32 (0.99)** |\\n\\n\\nBold text indicates the best performance. As shown in the tables, our framework with MQ-Former achieved the highest scores in both non-3D and 3D molecular property QA tasks by effectively utilizing molecular information from both dimensions. We have now properly updated the result tables as Table 5 in the revised manuscript.\"}", "{\"title\": \"Response to Reviewer R1Xc's comments - 3\", \"comment\": \"**Image/Formatting**: Thank you for your feedback regarding the presentation of the paper. We have carefully revised the plots to enhance their readability by adjusting the formatting and increasing font sizes for clarity. Especially for the image in Appendix A.6. Zero Shot Molecule Editing, we have separated the image to Appendix Figures 9~12 with captions for clarity.\\n\\n**Training loss weight ablation**: We truly value your detailed comments. 
To assess the impact of the weighting in the multi-objective training loss, we conducted experiments validating its effect on Stage 1 metrics, specifically molecule-text retrieval performance on the pretraining dataset. These evaluations were conducted at epoch 10. Based on the preliminary results observed at epoch 10, we analyzed the impact of loss weighting on molecule-text retrieval metrics (M2T and T2M), as shown in Table R9. The table indicates a clear tendency: amplifying the language model (LM) loss weight by a factor of 2 improves both accuracy (ACC) and recall at rank 20 (R@20) across the evaluated tasks. Specifically, we observed that the ACC for M2T increased from 69.87 to 70.90, and for T2M, from 69.26 to 71.15. Although there was a slight decrease in R@20 for M2T (from 97.75 to 96.98), T2M R@20 remained stable.\\n\\nThese preliminary findings suggest that amplifying the LM weight helps to better align the molecular and textual representations, leading to improved performance. Consequently, we adopted this adjusted weighting strategy for the subsequent stages of our experiments. 
We have organized additional ablation studies and their results in Appendix A.7.\\u00a0\\n\\n**Table R9.** Ablation Study: Training loss weights\\n| Model | M2T ACC | M2T R@20 | T2M ACC | T2M R@20 |\\n|-------|---------|----------|---------|----------|\\n| Lm*1 | 69.87 | 97.75 | 69.26 | 95.55 |\\n| Lm*2 | 70.90 | 96.98 | 71.15 | 95.96 |\"}", "{\"summary\": \"The paper proposes MQ-Former, an extension of the Q-Former framework, which incorporates a multi-query mechanism for aligning both 2D and 3D molecular data with textual information for enhanced molecule-text retrieval and molecule captioning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper aims to enhance cross-modal alignment by integrating 2D and 3D molecular views.\", \"The model demonstrates improvements in molecule-text retrieval and captioning performance over baseline models.\", \"The paper includes case studies and examples of zero-shot molecule editing.\"], \"weaknesses\": [\"The model lacks significant innovation, as MQ-Former primarily adds an extra branch to the existing Q-Former with only minor variations in training objectives.\", \"Experiments are restricted to molecule-text retrieval and captioning on PubChem. The paper lacks essential molecular tasks like molecule generation and datasets like ChEBI-20.\", \"The motivation for adding a branch to Q-Former, rather than simply using a 3D molecular encoder like prior works (e.g., 3D-MoLM), is unclear.\", \"The paper\\u2019s presentation could be improved. 
Plots lack careful formatting, with text that is difficult to read due to small font sizes.\"], \"questions\": [\"How does MQ-Former handle scenarios where 2D and 3D molecular information may not equally contribute to textual descriptions?\", \"Could the authors include more molecular tasks, such as molecule generation or property prediction, to provide a more comprehensive evaluation of MQ-Former?\", \"What impact does the weighting of the multi-objective training loss have on the model\\u2019s performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6JAy's comments 2 - (3/3)\", \"comment\": \"> 3. The setup of zero-shot editing stays unclear. What are the metrics of this task?\\n\\nWe apologize for any confusion this may have caused. We would like to clarify our main goal again. \\n\\nThe primary goal of MV-CLAM is **molecular captioning** (MolCA, 3d-MoLM, or UniMoT), not molecular generation. We have demonstrated MV-CLAM's effectiveness for molecular captioning through quantitative results. \\nAdditionally, as an *auxiliary case study*, we aim to explore whether the learned model can generalize to molecular generation tasks. This *qualitative experiment* serves as a preliminary investigation, opening to potential future research directions in the area of molecular generation using models designed for captioning. Specifically, we explore how well these tokens align with textual space by directly generating SMILES strings from raw general-purpose LLaMA tokenizers\\u2014an approach attempted for the first time in this context.\\n\\nThe task is evaluated using specific chemical property metrics that correspond to the given instruction prompt. This allows us to qualitatively assess the alignment and efficiency of our generated universal query tokens in guiding meaningful molecular modifications. 
For instance, in a prompt such as \\u201cincrease water solubility,\\u201d we evaluate the generated molecules based on their logP values, calculated using RDKit. The metric is defined as follows: If the generated molecule has a lower logP value than the original (indicating increased solubility), the result is considered a valid shot.\\n\\nThe overall task is assessed based on two criteria:\\n\\n1. Validity of the generated SMILES strings: The ability to construct valid molecular structures.\\n\\n2. Success in achieving the desired chemical property change: As measured by property-specific metrics like logP for solubility or other relevant descriptors depending on the prompt.\\n\\nThese chemical property metrics provide a systematic way to determine whether the generated molecules align with the textual editing instructions, while also showcasing how well the universal query tokens bridge textual and chemical spaces.\\n\\nWe sincerely value your time in reassessing our revisions. If you have any additional comments or suggestions, we would greatly appreciate your continued engagement during this discussion phase.\"}", "{\"title\": \"Response to Reviewer 6JAy's comments - 1\", \"comment\": \"**More baseline models, datasets**: Please kindly refer to \\u201cConcerning SOTA baseline models\\u201d and \\u201cConcerning the CheBI20 dataset\\u201d in General Response.\\n\\n**Incorporating multi-modal molecule encoders**: Please kindly refer to \\u201cConcerning novelty and technical contributions\\u201d in General Response.\\n\\n**Zero-shot editing task clarification**: We apologize for the lack of explanation in defining the implementation of zero-shot molecule editing. To comprehensively evaluate the quality and robustness of molecular query tokens produced by the MQ-Former module, we defined an auxiliary task focusing on SMILES generation guided by chemical properties. 
The primary objective of this auxiliary task was to assess whether the specialized language model, after training on molecule captioning, could effectively output valid chemical language (i.e., SMILES) without further tokenization. This evaluation helps determine the utility of the molecular query tokens and their alignment with chemical properties in textual descriptions. Furthermore, by editing molecular descriptions through textual prompts, we aimed to test the model's capacity to adapt its outputs and demonstrate the transferability of its learned molecular representations.\\n\\n> **Training Phase**: Fine-tune MV-CLAM to print SMILES directly from molecular universal tokens. Conduct the fine-tuning over 4 epochs, using both the PubChem324k pretraining and training datasets.\\n\\n> **Zero-shot editing evaluation phase**: Provide the model with unseen molecular structures (absent from the pretrain/train datasets; provided in the dataset described in the Appendix) and accompanying chemical prompts (e.g., \\u201cThe molecule is more soluble in water\\u201d). Assess the printed SMILES outputs for chemical property alterations that align with the prompts (e.g., solubility: logP).\\n\\nFor zero-shot editing task clarification, we have now properly included a description of the training and evaluation scheme in Section 6.5 and Appendix A.3.\\n\\n**Comparison with conditional generative models**: We acknowledge that the zero-shot editing task has inherent limitations. Due to the use of LLaMA2\\u2019s tokenizer, which is not explicitly trained for SMILES generation, the success rate of generating valid SMILES was not 100%. This underscores the challenges of leveraging a general-purpose tokenizer for such a specialized chemical task. 
Unlike conditional generative models that are explicitly designed to generate structured outputs like SMILES (e.g., through dedicated tokenizers or architecture adjustments), our approach represents an initial exploration of this capability with a raw, unmodified LLaMA2 architecture. As a result, direct comparisons with conditional generative models are difficult and not entirely fair, as those models are specifically tailored for tasks like molecular generation. Our focus was instead on demonstrating MQ-Former\\u2019s ability to bridge molecular and textual representations in a functional and chemically meaningful way. Future work could address these challenges by training custom tokenizers or fine-tuning specific generative models for more robust SMILES generation.\\n\\nWe appreciate the reviewer\\u2019s understanding of this distinction and providing an opportunity to highlight the novelty and challenges of our approach.\"}", "{\"metareview\": \"This paper proposes a novel multimodal LLM framework, MV-CLAM, for organic chemistry, and MQ-Former, a multi-query transformer model for simultaneous 1D, 2D, and 3D molecular representation learning, aiming to provide a more comprehensive understanding of molecules.\\n\\nThe idea of integrating 1D, 2D, and 3D molecular information is interesting. The design of MQ-Former involves adapting the Q-Former technique from multimodal learning. However, the experimental results are not convincing enough to demonstrate the superiority of the proposed method, due to issues such as missing baselines, insufficient ablation studies, and reliance on a single dataset. Additionally, the technical novelty is somewhat limited. Therefore, I do not recommend the acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewer expressed a lack of enthusiasm for the paper. 
AC tried to engage the discussion, but `Reviewer 6JAy, vfcG, R1Xc` did not respond.\\n\\nAC reviewed the paper, along with the comments and responses, to make the final decision.\"}", "{\"title\": \"Response to Reviewer 48Rx's comments - 3\", \"comment\": \"**Ablation Studies for MQ-Former training loss**\\n\\nIn cross-modal contrastive learning frameworks like CLIP [16], symmetric loss functions are used to calculate both image-to-text (i\\u2192t) and text-to-image (t\\u2192i) contrastive losses. This ensures that both modalities (in our case, molecular and textual) are equally optimized for alignment in the shared embedding space.\\nFor our MQ-Former framework, we adopt a similar principle, calculating molecule-to-text (mol\\u2192text) and text-to-molecule (text\\u2192mol) contrastive losses. These are integral components in molecular-text modeling tasks, as they ensure bidirectional alignment between molecular representations and textual descriptions. The total symmetric contrastive loss in *Equation 4* ensures bidirectional alignment: it encourages each molecular representation to match its corresponding text representation while contrasting with the other text representations in the batch, and vice versa. Together, these components enhance the MQ-Former\\u2019s ability to create robust molecular-text alignments, leveraging both 2D and 3D molecular structures in a shared embedding space.\\nWe plan to delve deeper into the exploration of negative samples in future experiments. Thank you for your valuable insight.\\n\\n**Applicability to unseen datasets**\\n\\nWe acknowledge the concern about the broader applicability of our methodology. Our current dataset only includes SMILES strings and corresponding text descriptions; we preprocess the data to autonomously generate the necessary 2D molecular graphs and 3D conformers based on the SMILES representation using RDKit. 
This ensures scalability to any dataset containing SMILES strings.\\nAdditionally, the purpose of using MQ-Former is to leverage pretrained molecular encoders, which are specifically designed to handle 2D and 3D molecular representations. The molecular embeddings given as input to MQ-Former are generated automatically using the pretrained encoders. Hence, we can process unseen SMILES strings to generate the required molecular representations (2D graphs and 3D conformers) and subsequently output captions. Thus, while our approach requires intermediate molecular representations, it is fully compatible with datasets as long as they provide SMILES strings.\\n\\n**Explanation on the methodology for molecule-text retrieval**\\n\\nFor molecule-text retrieval, our approach is formulated as a cross-modal embedding-based search by similarity. As shown in our implementation, we calculate similarity scores between graph (molecular) embeddings and text embeddings using matrix multiplication, rank items based on these scores, and evaluate performance using ranking metrics such as accuracy and recall. This approach aligns with methods that leverage embedding similarity for retrieval, rather than a generative framework like GENRE.\\n\\n**In Figure 3, where does the textual description come from during prediction on a test set? As far as I understand the molecule captioning task, you are only given a SMILES string.**\\n\\nFor clarification, the PubChem324k test dataset is composed of SMILES-text description pairs. MQ-Former generates universal molecular tokens (molecule SMILES \\u2192 2D/3D pretrained embeddings \\u2192 universal molecular token) to provide as input for generating molecular captions via LLaMA2. The generated captions are compared with ground-truth description labels using the metrics BLEU, METEOR, and ROUGE.\\n\\n**What is the LLaMA version you use? 
Add adopted HuggingFace checkpoints.**\\n\\nWe used the following version of LLaMA2 on Hugging Face, compatible with Transformers: baffo32/decapoda-research-llama-7B-hf (https://huggingface.co/baffo32/decapoda-research-llama-7B-hf). We have clarified this in Appendix A.3 of our manuscript as well. Thank you.\\n\\n**Typos and missing references** \\n\\nThank you for pointing out the typos and missing references. We appreciate your attention to detail, and have carefully revised the manuscript to correct the typos and ensure that all references are properly cited.\\n\\nReferences \\n> [16] Radford et al., \\\"Learning Transferable Visual Models From Natural Language Supervision\\\"\"}", "{\"summary\": \"The work proposes a novel multimodal LLM framework, MV-CLAM, for organic chemistry, and MQ-Former, a multi-querying transformer model for simultaneous 1D, 2D, and 3D molecular representation learning. The authors show SOTA results in the two tasks of molecule-text retrieval and molecule captioning. In addition, the authors claim that their approach allows zero-shot molecule editing and molecule-related question answering.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"A new molecular multimodal LLM framework for the simultaneous incorporation of 1D, 2D, and 3D representations.\\nA new Transformer architecture, MQ-Former.\", \"weaknesses\": \"The claim of state-of-the-art performance for molecule captioning is not supported; see the results in [6].\\nThere is no comparison with other strong retrieval methods for the molecule retrieval task, e.g., RAG.\\nThere are various problems with the Zero-shot editing part of the paper. The task is not formally defined. 
There are no metrics or baselines for it.\\n\\nThe QA part is practically absent in the paper, while claimed in the abstract and results parts.\\nThere are many works on molecular conformation generation [1-4]; it seems that SMILES and/or 2D-graph representation is enough for neural networks to reconstruct RDKit conformations almost perfectly. It means that 3D input possibly does not add any new information to the model. There is no comparison of the 1D+2D+3D MQ-Former vs 1D+2D models in the paper.\\n\\nThere is no comparison with other works on multi-modal representation learning for molecules, e.g., [5]. \\n\\n[1] Zhu, Jinhua, et al. \\\"Direct Molecular Conformation Generation.\\\"\\n[2] Xu, Minkai, et al. \\\"GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation.\\\" International Conference on Learning Representations.\\n[3] Jing, Bowen, et al. \\\"Torsional diffusion for molecular conformer generation.\\\" Advances in Neural Information Processing Systems 35 (2022): 24240-24253.\\n[4] Lee, Danyeong, et al. \\\"Disco: Diffusion Schr\\u00f6dinger bridge for molecular conformer optimization.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024.\\n[5] Manolache, Andrei, Dragos Tantaru, and Mathias Niepert. \\\"MolMix: A Simple Yet Effective Baseline for Multimodal Molecular Representation Learning.\\\" arXiv preprint arXiv:2410.07981 (2024).\\n[6] Liu, Zhiyuan, et al. \\\"ReactXT: Understanding Molecular \\\"Reaction-ship\\\" via Reaction-Contextualized Molecule-Text Pretraining.\\\" arXiv preprint arXiv:2405.14225 (2024).\", \"questions\": \"1. 3D structures (conformers)\\n\\nAs mentioned in Sec. 5.1, you use MMFF for molecular conformation generation.\\n\\na. Is it ETKDG geometry generation with further MMFF optimization?\\nb. Since it is possible to generate several different conformers for a single molecular structure, did you assess the dependence of the model quality on the conformations? 
Is it necessary to optimize a conformer generated with ETKDG using MMFF?\\n\\n2. It would be reasonable to compare your approach for Zero-shot editing with conditional generation models for small molecules.\\n\\n3. Please add experiments on the CHEBI-20 benchmark for the captioning task.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer R1Xc's comments - 1\", \"comment\": \"**Substantial support on novelty and technical contributions**\\n> \\u201cHow does MQ-Former handle scenarios where 2D and 3D molecular information may not equally contribute to textual descriptions?\\u201d\\n\\nThank you for this insightful question, which directly aligns with the motivation behind our research. We believe that 2D and 3D molecular representations capture distinct and complementary aspects of molecular information, particularly in their connection to textual descriptions. For example, 2D information encodes connectivity and topological features, while 3D information represents spatial and geometric properties that are critical for describing stereochemistry or binding conformations.\\u00a0As demonstrated in Appendix Figures 5-6, 2D and 3D query tokens obtained from our MQ-Former architecture attend to different words in the textual descriptions. This highlights that each modality contributes uniquely to understanding the molecular-text relationship.\\n\\nTo address this, our renovated MQ-Former architecture aligns 2D and 3D multi-view embeddings simultaneously to the text. This simultaneous alignment ensures that both modalities are leveraged in a complementary manner, allowing the model to capture richer molecular semantics and better connect them with textual descriptions. We further validate the novelty and technical contributions of our approach using embedding space visualization and additional ablation studies. 
Please kindly refer to \\u201cConcerning novelty and technical contributions\\u201d in General Response for details.\\n\\u00a0\\n\\n**Construct more downstream tasks**\\n\\n**1. QA results unavailable in main text**: Thank you for your valuable comment. We have evaluated the QA experiment results by comparing our framework, MV-CLAM\\u2014integrating the Multi-querying Transformer module with both 3D and 2D molecular encoders\\u2014against frameworks that use a single molecular encoder. Specifically, we compared it to:\\n\\n- 3D-MoLM: Q-Former with a 3D molecular encoder\\n- 2D-MoLM: Q-Former with a 2D molecular encoder\\n\\nFor each case, including our framework, we reproduced QA results for both non-3D properties (Molecular Weight, LogP, Complexity, and Topological Polar Surface Area) and 3D properties (HOMO, LUMO, HOMO-LUMO gap, and SCF energy), as shown in Table R5 and R6.\\u00a0\\n\\n**Table R5.** Comparison of QA results for non-3D molecular property prediction. The values in parentheses indicate the validity score. \\n| Model | Molecular Weight | LogP | Complexity | Topological Polar Surface Area |\\n|-------------------|-----------------------|------------|------------------|-------------------------------------|\\n| 2D-MoLM | 47.51 (0.98) | 0.89 (0.99)| 110.78 (0.99) | 16.65 (0.99) |\\n| 3D-MoLM | 42.76 (0.96) | 1.25 (0.96)| 105.03 (0.96) | 20.97 (0.92) |\\n| MQ-Former (Ours) | **21.35 (0.92)** | **0.69 (0.94)** | **55.14 (0.91)** | **9.65 (0.91)** |\\n\\n\\n**Table R6.** Comparison of QA results for 3D molecular property prediction. 
The values in parentheses indicate the validity score.\\n| Model | HOMO | LUMO | HOMO-LUMO | SCF Energy |\\n|-------------------|------------|------------|----------------|-----------------|\\n| 2D-MoLM | 0.78 (0.99)| 0.47 (0.99)| 0.39 (0.90) | 0.98 (1.00) |\\n| 3D-MoLM | 0.42 (0.99)| 0.44 (0.98)| 1.26 (0.99) | 1.22 (0.98) |\\n| MQ-Former (Ours) | **0.35 (0.98)** | **0.42 (0.93)** | **0.35 (0.99)** | **0.32 (0.99)** |\\n\\n\\nBold text indicates the best performance. As shown in the tables, our framework with MQ-Former achieved the highest scores in both non-3D and 3D molecular property QA tasks by effectively utilizing molecular information from both dimensions. We have now properly updated the result tables as Table 5 in the revised manuscript.\"}", "{\"title\": \"Response to Reviewer vfcG's comments\", \"comment\": \"**Substantial support on novelty and technical contributions**\\n: Please kindly refer to \\\"*General Response (3). Concerning novelty and technical contributions*\\u201d. \\n\\n**Alternative Usage of SELFIES instead of SMILES**\\n: Following your recommendation, we conducted additional experiments to compare the performance of SELFIES and SMILES representations within our framework. By replacing SMILES with SELFIES, we trained our model for molecule captioning using identical training hyperparameters on the PubChem324k training dataset. The results demonstrated that models using SMILES outperformed those with SELFIES in terms of BLEU, METEOR, and ROUGE scores. Nonetheless, we appreciate your suggestion and recognize SELFIES as a promising alternative for certain use cases. The results are also available in Appendix A.7.\\n\\n**Table R7**. 
Comparison of SELFIES and SMILES as 1D representations\\n\\n| | BLEU2 | BLEU4 | METEOR | ROUGE1 | ROUGE2 | ROUGE-L |\\n|------------|-------|-------|--------|--------|--------|---------|\\n| **SELFIES** | 28.39 | 20.89 | 33.25 | 37.58 | 22.49 | 31.37 |\\n| **SMILES** | **31.75** | **24.48** | **36.54** | **40.43** | **25.72** | **33.79** |\\n\\n\\n**Clarify images**\\nThank you for pointing out the issues with the images, particularly the one on page 18. We have revised the manuscript to address these concerns. The non-vector graphics have been replaced with vector graphics for improved clarity and quality. Additionally, we have added appropriate titles and captions to all images, including the one on page 18, to enhance their interpretability and provide necessary context.\"}", "{\"title\": \"General Response (1). Concerning SOTA baseline models - 1\", \"comment\": \"We sincerely appreciate the constructive feedback regarding the comparison with state-of-the-art (SOTA) models. **We chose 3D-MoLM [1] and UniMoT [2] as our baseline models because they are the most closely aligned with our model in terms of structural similarity and training data, allowing for the most equitable evaluation regarding our novel cross-modal projector, MQ-Former.** We appreciate the suggestion to implement additional SOTA methods and have carefully considered their inclusion.\\n\\nWhile additional comparisons with other SOTA methods remain important, there are differences in dataset composition and preprocessing pipelines across suggested models (Table R1). Even within the PubChem dataset, prior models adopted different preprocessing procedures, leading to variations in the number of molecule entities. As a result, previous works also have demonstrated inconsistencies in the selection of baseline models. 
**To ensure a fair comparison, we aligned our selection with UniMoT, the most recent model trained on the same dataset.** We updated Table 1 in the revised manuscript (presented as Table R2) to include MoleculeSTM [3] and MolCA [4]. Similarly, Table 2 in the revised manuscript (shown as Table R3) was updated to include InstructMol [5] and MolCA as baseline models, following the UniMoT paper.\\n\\nWe recognize the value of retraining other models under the same dataset conditions for a rigorous comparison. However, training a single model and performing additional fine-tuning would exceed the available timeline. Therefore, we concentrated our efforts on the most feasible and impactful evaluations within the given discussion timeframe. We prioritized validating the key contributions and novelty of our proposed MQ-Former model through ablation studies and targeted experiments that justify its design and effectiveness. These experiments highlight the distinct advantages of our approach in leveraging 2D and 3D molecular representations to align with textual information, as shown in Tables R4-R6 and the updated Appendix Figures 7 and 13 in the revised manuscript.\\n\\n\\n\\n**Table R1.** Pretraining datasets of baseline models\\n| Model | Dataset - #pretrain |\\n|--------------|---------------------------------------------|\\n| MolCA | PubChem-MolCA-298k |\\n| 3DMoLM | PubChem-3DMoLM-301k |\\n| UniMoT | PubChem-3DMoLM-301k |\\n| MolBind [6] | PubChem-MolBind-319k |\\n| TextChem T5 [7] | CheBI-Train |\\n| GitMol [8] | PubChem+CheBI-GitMol-320k \\u2192 90k (multimodal)|\\n| Atomas [9] | PubChem-Atomas-243k \\u2192 51k (high-quality, leak-free) |\\n| Presto [10] | PubChem-Presto-326k |\\n| AMORE [11] | CheBI-Train |\\n| SciFive [12] | CheBI-Train |\\n| nach0 [13] | Mol-Instructions |\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Dear authors,\\n\\nI have read all your clarifications carefully and decided to keep my initial 
assessment.\\n\\nSincerely,\\nreviewer\"}", "{\"title\": \"General Response (1). Concerning SOTA baseline models - 2\", \"comment\": \"We provide updated table for molecule-text retrieval and molecule captioning.\\n\\n**Table R2.** Updated molecule-text retrieval performance.\\n| Model | Retrieval in Batch (M2T) | Retrieval in Batch (T2M) | Retrieval in Test Set (M2T) | Retrieval in Test Set (T2M) |\\n|-------------------|--------------------------|---------------------------|-----------------------------|-----------------------------|\\n| | ACC | R@20 | ACC | R@20 | ACC | R@20 | ACC | R@20 |\\n| **1D SMILES** | | | | | | | | |\\n| Sci-BERT | 85.32 | 98.74 | 84.20 | 98.43 | 41.67 | 87.31 | 40.18 | 86.77 |\\n| KV-PLM | 86.05 | 98.63 | 85.21 | 98.47 | 42.80 | 88.46 | 41.67 | 87.80 |\\n| **2D Graph** | | | | | | | | |\\n| MoMu-S | 87.58 | 99.24 | 86.44 | 99.38 | 47.29 | 90.77 | 48.13 | 89.92 |\\n| MoMu-K | 88.23 | 99.41 | 87.29 | 99.42 | 48.47 | 91.64 | 49.46 | 90.73 |\\n| MoleculeSTM* | 90.50 | 99.60 | 88.60 | 99.50 | 52.70 | 92.90 | 53.20 | 92.50 |\\n| MolCA* | 92.60 | 99.80 | 91.30 | 99.50 | 67.90 | 94.40 | 68.60 | 93.30 |\\n| **2D Graph + Tokenizer** | | | | | | | | |\\n| UniMoT | _93.60_ | **100.0** | 92.70 | 99.40 | _69.50_ | _96.30_ | 69.80 | 94.40 |\\n| **3D Conformer** | | | | | | | | |\\n| 3D-MoLM | 93.50 | **100.0** | _92.89_ | _99.59_ | 69.05 | 95.91 | _70.13_ | _94.88_ |\\n| **2D Graph + 3D Conformer** | | | | | | | |\\n| MV-CLAM | **96.57**| _99.95_ | **97.03**| **99.95** | **76.32**| **96.57** | **77.03**| **96.42** |\\n\\n\\n**Table R3.** Updated molecule captioning performance.\\n| Model | BLEU-2 | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR |\\n|--------------------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| **1D SMILES** | | | | | | |\\n| MolT5-Small | 22.53 | 15.23 | 30.44 | 13.45 | 20.30 | 23.98 |\\n| MolT5-Base | 24.51 | 16.61 | 32.19 | 14.04 | 21.35 | 26.10 |\\n| MolT5-Large | 25.87 | 
17.28 | 34.07 | 16.42 | 23.41 | 28.04 |\\n| Llama2-7B\\u2020 | 27.01 | 20.94 | 35.76 | 20.68 | 28.88 | 32.11 |\\n| **2D Graph** | | | | | | |\\n| MoMu-Small | 22.86 | 16.01 | 30.98 | 13.65 | 20.75 | 24.35 |\\n| MoMu-Base | 24.74 | 16.77 | 32.45 | 14.62 | 22.09 | 27.16 |\\n| MoMu-Large | 26.34 | 18.01 | 34.75 | 16.86 | 24.76 | 28.73 |\\n| 2D-MoLM\\u2020 | 27.15 | 21.19 | 36.02 | 20.76 | 29.12 | 32.28 |\\n| InstructMol* | 18.90 | 11.70 | 27.30 | 11.80 | 17.80 | 21.30 |\\n| MolCA-Small* | 25.90 | 17.50 | 34.40 | 16.60 | 23.90 | 28.50 |\\n| MolCA-Large* | 28.60 | 21.30 | 36.20 | 21.40 | 29.70 | 32.60 |\\n| **2D Graph + Tokenizer** | | | | | | |\\n| UniMoT | _31.30_ | _23.80_ | _37.50_ | _23.70_ | _33.60_ | _34.80_ |\\n| **3D Conformer** | | | | | | |\\n| 3D-MoLM | 30.32 | 22.52 | 36.84 | 22.32 | 31.23 | 33.06 |\\n| **2D Graph + 3D Conformer** | | | | | | |\\n| MV-CLAM | **31.75** | **24.48** | **40.43** | **25.72** | **33.79** | **36.54** |\"}", "{\"title\": \"Response to Reviewer 48Rx's comments - 2\", \"comment\": \"**Justification for implementation decisions**\\n\\nThank you for your insightful comments. We appreciate the opportunity to clarify the implementation decisions and their rationale. Below, we address each of the points raised and provide additional justifications:\\n\\n1. **Choice of SciBERT as the Language Encoder for MQ-Former**\\n> The MQ-Former module is initialized using SciBERT checkpoints because SciBERT has been pre trained on scientific texts, including chemical literature, and is well-suited for extracting textual features from domain-specific datasets. This aligns with the initialization strategy employed by related works such as MolCA and 3d-MoLM. For molecule captioning, the text embeddings are subsequently encoded and decoded using the LLaMA2 tokenizer, enabling us to combine SciBERT\\u2019s scientific domain expertise with LLaMA2\\u2019s strong language modeling capabilities. 
Our intention was to keep the encoders consistent with prior works to establish a fair baseline while focusing on evaluating the cross-modal projection capabilities of MQ-Former.\\n> To improve clarity in the manuscript, we have rephrased sections to better explain this dual-encoder strategy. Specifically, the \\\"text encoder\\\" subsection has been removed, and the details have been integrated into Section 3.2 (MQ-Former) and Appendix A.3 (Experimental Settings) to provide a more cohesive explanation.\\n\\n2. **Choice of 2D and 3D Encoders**\\n> We adopted state-of-the-art graph-based molecular encoders for 2D (Molecular Attention Transformer) and 3D (Uni-Mol). These choices are motivated by their proven effectiveness in molecular representation learning and their alignment with the architecture of 3D-MoLM. While we agree that exploring alternative 2D and 3D encoders is valuable, our primary goal in this study was to demonstrate MQ-Former\\u2019s cross-modal projection advancements when aligning 2D embeddings simultaneously with 3D embeddings. Future work could replace these encoders with alternatives to further generalize the approach.\\n\\n3. **Introduction of Query Tokens Instead of a Single Query Token**\\n> The decision to utilize multiple learnable query tokens in our MQ-Former architecture is inspired by the original work in BLIP-2, which also employs multiple learnable query tokens for cross-modal alignment. This approach has been shown to enhance the model's capacity to attend to diverse aspects of visual or structural representations and align them effectively with textual descriptions. Previous works (MolCA, 3D-MoLM) also use multiple tokens instead of one.\\n> We also conducted a preliminary ablation study comparing the use of a single query token versus multiple query tokens affecting molecule-text retrieval performance on the pretraining dataset (Table R8). These evaluations were conducted at epoch 10. 
We also showcase an attention map (Appendix Figure 13) to show multiple query tokens allow the model to capture distinct attention patterns in textual descriptions. This decision aligns with the design philosophy of BLIP-2 [15] and ensures that MQ-Former is capable of leveraging the unique information provided by each modality for more comprehensive molecule captioning. The results have been organized into Appendix A6 (Number of Query Tokens).\\n\\n**Table R8**. Ablation study: Number of query tokens\\n\\n| Inbatch | M2T | | T2M | | Full | M2T | | T2M | |\\n|---------|-----------|---------|-----------|---------|------|-----------|---------|-----------|---------|\\n| | ACC | R@20 | ACC | R@20 | ACC | R@20 | ACC | R@20 | |\\n| 1 | 96.16 | 99.85 | 95.40 | **99.85** | 70.08| 96.42 | 70.97 | 95.50 | |\\n| 12 | **96.73** | **99.90** | **96.01** | **99.85** | **70.90** | **96.98** | **71.15** | **95.96** | |\\n\\n4. **Choice of LLaMA2 as the Base Language Model**\\n> We chose LLaMA2 as the base large language model due to its strong performance in molecule-text modeling, as demonstrated in related works like 3D-MoLM and UniMoT. By utilizing LoRA, the actual parameters used for training is only 0.29% of the total parameters in the LLaMA2-7B, comparable even to the size of small MolT5-1B. While MQ-Former is model-agnostic and can work with other architectures (e.g., T5-based models), we prioritized *consistency with prior studies (3D-MoLM, UniMoT)* for fair comparisons, emphasizing the performance of our novel cross-modal projector, MQ-Former.\\n\\nReferences \\n> [15] Li et al., \\\"BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models\\\"\"}", "{\"summary\": \"The paper introduces a framework that leverages large language models (LLMs) to interpret and generate molecular captions. 
The work incorporates both 2D and 3D molecular structures to provide a more comprehensive understanding of molecules.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper integrates both 2D and 3D molecular structures to enhance the model's understanding of molecular data.\\n2. The paper includes detailed figures (Figure 1-3) that clearly explain the method's framework and training scheme. \\n3. And the analysis of attention maps in Appendix A.4 provides valuable insights into the model's behavior.\", \"weaknesses\": \"1. Compared to recent related work, such as 3D-MoLM (Li et al., 2024), the innovation in MV-CLAM appears incremental. While the paper claims to incorporate both 2D and 3D molecular structures for a more comprehensive understanding, the approach seems to merely extend the 3D-MoLM framework by introducing 2D components through MAT. The proposed MQ-former architecture does not demonstrate significant structural innovations beyond existing methods. A clearer articulation of the novel contributions and architectural advantages over 3D-MoLM would be necessary to establish the work's originality.\\n2. The paper considers SMILES as an important molecular modality and notes that \\\"1D SMILES provide compact representation of molecular structures\\\", but does not mention SELFIES (Krenn et al., 2020) at all, which has been widely adopted in recent works due to its robust characteristics and tokenization-friendly nature. SELFIES offers inherent robustness and easier tokenization that aligns well with LLMs, making it a potentially more suitable choice for this application. \\n3. Some images (e.g. the big image at page 18) are not vector graphics and lack titles or captions, which makes it confusing.\", \"questions\": \"See 'Weaknesses' section.\\n1. Could the authors provide a more detailed explanation of the novelty of MV-CLAM compared to recent related work?\\n2. 
Why was SELFIES not considered as a molecular modality in this work, given its advantages over SMILES in tokenization and alignment with LLMs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 48Rx's comments - 1\", \"comment\": \"**Baseline models and datasets**: Please kindly refer to *General Response \\u201c (1). Concerning SOTA baseline models\\u201d and \\u201c(2). Concerning the CheBI20 dataset*\\u201d.\\n\\n**Adding additional baseline models (ChemBERTa, MolT5) for the retrieval task (Table 1)**: \\n\\nThank you for the insightful comment. ChemBERTa, as a chemical BERT-based encoder, could be considered as an alternative to SciBERT. However, unlike SciBERT, ChemBERTa is pretrained specifically on SMILES strings derived from chemical databases like PubChem and ChEMBL, excelling in tokenizing SMILES using a Byte-Pair Encoding (BPE) tokenizer. Given the MQ-Former architecture, which takes both molecular graphical structures and textual descriptions as inputs, we utilized the architecture and pretrained weights of SciBERT, pretrained on a large corpus of scientific texts. Additionally, we adhered to approaches from *prior molecule-captioning research using Q-Former, such as 3D-MoLM and UnimoT, which demonstrated the effectiveness of SciBERT in similar tasks*.\\n\\nMolT5, based on the T5 (Text-to-Text Transfer Transformer) architecture, is optimized for sequence generation, making it less efficient for molecule-text retrieval tasks compared to bidirectional encoders like SciBERT, which are better suited for retrieval. Its training scheme also differs from ours and the selected baseline models. 
Consequently, *prior works (3D-MoLM, UnimoT, MolCA) have not included MolT5 in molecule-text retrieval comparisons*, focusing instead on molecule-captioning tasks, as we do.\"}", "{\"title\": \"Response to Reviewer 6JAy's comments 2 - (2/3)\", \"comment\": \"> 2. The same questions for 3D-MoLM models in the Q&A task. The metrics in the original paper are significantly better.\\n\\nWe reproduced the Q&A task of 3D-MoLM because, while conducting the Q&A experiments using the code provided by 3D-MoLM, we found that the method used to extract properties (e.g., \\u201cThe Molecular Weight for the input molecule is 123.18 g/mol\\u201d) at the final stage **significantly** affected performance. Specifically, unless we manually extracted the properties one by one, the automated extraction process could lead to errors. However, 3D-MoLM did not provide a clear method for extracting these properties.\\n\\nTo ensure a fair comparison, we standardized the property extraction process across all models and conducted the performance evaluation using this consistent approach. This allowed us to remove inconsistencies caused by different extraction methods and ensure a more accurate comparison of model performance.\\n\\nAlthough our paper utilizes reproduced results for the above reasons, under the same circumstance using the original (without GPT3.5-enrichment) PubChem324k dataset, our model obtains superior performance compared to 3D-MoLM and 2D-MoLM in the original paper. We provide the comparison of reported official performance for 3D-MoLM, 2D-MoLM and Llama2-7B with ours in TableR10, R11. \\n\\n**Table R10.** Comparison of QA results for non-3D molecular property prediction. We report the performance as given in the original paper. We report 3D-MoLM results trained on the original (without GPT3.5 enrichment) dataset. 
\\n| Model | Molecular Weight | LogP | Complexity | Topological Polar Surface Area |\\n|-------------------|-----------------------|------------------|------------------|-------------------------------------|\\n| Llama2-7B | 22.10 (0.96) | 1.45 (0.95)| 69.74 (0.93) | 15.87 (0.92) |\\n| 2D-MoLM | 21.48 (0.94) | 0.88 (0.96)| 55.74 (0.94) | 13.52 (0.92) |\\n| 3D-MoLM (Generalist) | 19.54 (0.93) | 0.92 (0.92) | 54.68 (0.90) | 11.14 (0.92) |\\n| 3D-MoLM (Specialist) | **16.18 (0.96)** | 0.95 (0.96)| **49.15 (0.95)** | 10.26 (0.94) |\\n| MV-CLAM (Ours) | 21.35 (0.92) | **0.69 (0.94)** | 55.14 (0.91) | **9.65 (0.91)** |\\n\\n**Table R11.** Comparison of QA results for 3D molecular property prediction. We report the performance as given in the original paper. We report 3D-MoLM results trained on the original (without GPT3.5 enrichment) dataset. \\n| Model | HOMO | LUMO | HOMO-LUMO | SCF Energy |\\n|-------------------|------------|------------|----------------|-----------------|\\n| Llama2-7B | 1.24 (0.96)| 1.04 (0.95)| 0.88 (0.92) | 0.70 (0.99) |\\n| 2D-MoLM | 0.92 (0.98)| 0.80 (0.96)| 0.67 (0.93) | 0.71 (0.99) |\\n| 3D-MoLM (Generalist) | 0.65 (0.94)| 0.41 (0.92)| 0.55 (0.89) | 0.49 (0.99) |\\n| 3D-MoLM (Specialist) | 0.45 (0.98)| **0.36 (0.96)**| 0.41 (0.94) | 0.39 (0.99) |\\n| MV-CLAM (Ours) | **0.35 (0.98)** | 0.42 (0.93) | **0.35 (0.99)** | **0.32 (0.99)** |\\n\\n\\nNote that we chose to use the original dataset without GPT-enriched descriptions, aligning with observations noted in the 3D-MoLM paper: \\n\\n> \\\"The retrieval performance on the PubChem test set appears to be negatively impacted by GPT-3.5 enrichment. We infer that this decline is caused by the enrichment process enlarging the distribution gap between the pretraining and downstream tasks.\\\"\\n\\nSince our study emphasizes cross-modal alignment over captioning, we concluded that GPT-enriched descriptions might do more harm than good for downstream retrieval performance. 
This choice ensures our results remain aligned with the study\\u2019s objectives and avoid negative impacts on cross-modal tasks.\"}" ] }
063FuFYQQd
LLaVA-Surg: Towards Multimodal Surgical Assistant via Structured Lecture Learning
[ "Jiajie Li", "Garrett Skinner", "Brian R Quaranto", "Gene Yang", "Steven D Schwaitzberg", "Peter C W Kim", "Jinjun Xiong" ]
Multimodal large language models (LLMs) have achieved notable success across various domains, while research in the medical field has largely focused on unimodal images. Meanwhile, current general-domain multimodal models for videos still lack the capabilities to understand and engage in conversations about surgical videos. One major contributing factor is the absence of datasets in the surgical field. In this paper, we create a new dataset, Surg-QA, consisting of 102,000 surgical video-instruction pairs, the largest of its kind so far. To build such a dataset, we propose a novel two-stage question-answer generation pipeline with LLM to learn surgical knowledge in a structured manner from the publicly available surgical lecture videos. The pipeline breaks down the generation process into two stages to significantly reduce the task complexity, allowing us to use a more affordable, locally deployed open-source LLM than the premium paid LLM services. It also mitigates the risk of LLM hallucinations during question-answer generation, thereby enhancing the overall quality of the generated data. We further train LLaVA-Surg, a novel vision-language conversational assistant capable of answering open-ended questions about surgical videos, on this Surg-QA dataset, and conduct comprehensive evaluations on zero-shot surgical video question-answering tasks. We show that LLaVA-Surg significantly outperforms all previous general-domain models, demonstrating exceptional multimodal conversational skills in answering open-ended questions about surgical videos. We will release our code, model, and the instruction-tuning dataset.
[ "Multimodal assistant", "surgical", "multimodal instruction-following data", "dataset" ]
Reject
https://openreview.net/pdf?id=063FuFYQQd
https://openreview.net/forum?id=063FuFYQQd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "dkbJzxbH3G", "XGYQVlsjFk", "SnuEs3dhxU", "NyI6iTvIWO", "MxvlHdZ7WE", "CjsZ4Ai0js", "5n7k3cYPen" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1730499784644, 1730731838359, 1734299013452, 1730695599596, 1730361702171, 1730572638969, 1737523668668 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4896/Reviewer_QjG9" ], [ "ICLR.cc/2025/Conference/Submission4896/Reviewer_A8vi" ], [ "ICLR.cc/2025/Conference/Submission4896/Area_Chair_VxPb" ], [ "ICLR.cc/2025/Conference/Submission4896/Reviewer_xkns" ], [ "ICLR.cc/2025/Conference/Submission4896/Reviewer_zRU3" ], [ "ICLR.cc/2025/Conference/Submission4896/Reviewer_6mWr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a novel surgical multimodal dataset, which consists of over 102,000 video-instruction pairs generated through a two-stage pipeline, aimed at enhancing the understanding and conversational capabilities of surgical videos.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. With over 102,000 video-instruction pairs, this dataset is the largest in the surgical field.\\n2. Structured data annotation pipeline using LLMs minimizes the risk of generating inaccurate or nonsensical content, improving dataset reliability.\\n3. Releasing the dataset, model, and code publicly fosters further research and development in the surgical AI domain.\\n4. The dataset can be a valuable resource for training and education, helping surgical trainees learn through interactive Q&A about real procedures.\", \"weaknesses\": \"1. The paper does not address how the data's quality is maintained as the videos are obtained from the web. 
The clinicians have reviewed the output of their MLLM model, but the paper does not confirm whether clinicians or domain experts have reviewed the raw data to ensure accuracy and reliability.\\n2. Concerns regarding the release, privacy, and permission risks associated with using sensitive surgical videos are not adequately discussed.\\n3. The paper lacks comprehensive validation across essential surgical downstream tasks and other surgical QA datasets, which are crucial for demonstrating clinical usability. There is also a need for more rigorous benchmarking against a broader range of state-of-the-art video MLLM architectures to establish the dataset's utility and the model's performance more robustly.\\n4. The comparison of the proposed methods with SOTA methods is limited and does not include the latest works. The manuscript also lacks evaluations with models trained on other surgical datasets, limiting the assessment of the proposed model's generalizability across different surgical scenarios.\\n5. The paper may need to evaluate the visual quality of the surgical videos.\", \"questions\": \"1. How can the quality of the data be ensured? The data collected may already contain a lot of noise and has been reprocessed by an LLM. Is there any person or clinician reviewing these raw data?\\n2. Can the data be released? Are there privacy and permission risks associated with the collected data?\\n3. The authors need to conduct more zero-shot evaluations on downstream tasks relevant to the surgical field, such as phase recognition, action/instrument classification, and other surgical domain VQA data to demonstrate the clinical usability of their method.\\n4. The authors need to compare with more state-of-the-art methods. The comparison methods in Table 3 were all first released in 2023.\\n5. The authors may verify their dataset on more benchmarks of SOTA Video MLLM architectures.\\n6. 
Also, the authors need more zero-shot comparisons with the same VLM trained on other surgical datasets, to showcase the generalizability of their proposed dataset.\\n7. The authors may evaluate the visual quality of the surgical videos themselves, as they are obtained from the website.\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"Potential copyright problem for online data.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces LLaVA-Surg, a multimodal large language model designed as a conversational assistant for surgical applications. To support this, the authors developed Surg-QA, a large-scale dataset containing 102,000 surgical video-instruction pairs, generated through a structured two-stage question-answer pipeline. This pipeline helps extract structured knowledge from surgical lecture videos, enabling the LLaVA-Surg model to understand complex surgical procedures and answer open-ended questions in a zero-shot setting. The model leverages CLIP for visual encoding and is fine-tuned on Surg-QA to specialize in surgical video question-answering, achieving superior performance compared to existing general-domain models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe authors provide a novel dataset, Surg-QA, which is a significant resource for training multimodal surgical models, covering diverse surgical procedures and question-answer pairs.\\n2.\\tThe two-stage pipeline for question-answer generation mitigates hallucinations in LLM outputs, resulting in higher quality and reliability of generated data.\\n3.\\tLLaVA-Surg demonstrates notable improvements over general multimodal models in zero-shot surgical video question-answering tasks, showcasing its efficacy in understanding surgical context.\", \"weaknesses\": \"1. 
The paper should compare its model with recent multimodal LLM approaches, specifically ReAct (Yao et al., 2023), which combines reasoning and action for complex tasks.\\n[1] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2023, January). ReAct: Synergizing Reasoning and Acting in Language Models. In International Conference on Learning Representations (ICLR).\\n2. Using CLIP for frame-by-frame encoding lacks temporal modeling and increases processing costs and redundancy, burdening the LLM as frame count grows.\\n3. The paper lacks an in-depth error analysis, especially regarding potential hallucinations or misunderstandings in complex surgical scenarios. Although the authors claim to reduce hallucinations, achieving perfect performance seems challenging.\\n4. The model\\u2019s adaptability to other medical or clinical fields is unclear, as broader evaluations on datasets like RAD, SLAKE, and PathVQA are missing, which may limit its wider applicability.\", \"questions\": \"1. Does splitting video into frames for CLIP\\u2019s visual encoder lead to a loss of spatiotemporal information, and wouldn\\u2019t a video encoder like Video Swin Transformer [2] better capture temporal dynamics?\\n[2] Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., & Hu, H. (2022). Video swin transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3202-3211).\\n2. How does LLaVA-Surg perform compared to other state-of-the-art multimodal methods? In addition to general multimodal models, a detailed comparison with models like ReAct would provide a more comprehensive evaluation. Has comparison with other two-stage methods [3] in VQA task been overlooked?\\n[3] Gai, X., Zhou, C., Liu, J., Feng, Y., Wu, J., & Liu, Z. (2024). MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale. arXiv preprint arXiv:2404.12372.\\n3. 
Is the two-stage question-answer generation process applicable to other medical fields, and if so, what adjustments would be required? Additionally, validating the method\\u2019s performance on public datasets like RAD, SLAKE, and PathVQA would strengthen its generalizability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces Surg-QA, a large surgical video-instruction dataset, and LLaVA-Surg, a vision-language assistant for surgical video Q&A. The paper is well-written and presented clearly. It contributes a novel dataset, and strong results are achieved. However, key shortcomings of the paper include a lack of in-depth comparisons with state-of-the-art, concerns regarding the data quality, room for better temporal modelling, and reproducibility issues.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal was provided and the reviewers kept or decreases their original scores.\"}", "{\"summary\": \"The paper introduces LLaVA-Surg, a multimodal conversational assistant based on surgical videos. Additionally, they introduce a new dataset with 102,000 question-answer pairs for training multimodal LLMs. The authors provide details of their data generation procedure, which is carefully designed to avoid hallucinations. The paper provides detailed comparisons with existing general-purpose and surgical-purpose datasets. Lastly, the authors provide a human and LLM evaluation of the dataset, showing consistent scores.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Clarity**: The paper is well-written and easy to follow.\", \"**Contributions**: This work makes a significant contribution to the development of surgical chat assistants. The dataset contains a wider range of surgical QAs compared to previous works. 
The proposed model and dataset may be valuable resources for researchers in this area.\"], \"weaknesses\": [\"**Dataset Availability**: The surgical videos are available on WebSurg and are not a contribution of the authors. Therefore, the data availability may be subject to license changes from the content owners and WebSurg.\", \"**Hallucinations and Data Quality**: As the authors mentioned, there may be hallucinations in the dataset, since it is automatically generated. The authors provide chatGPT and human evaluations, but that is not enough to infer the data quality.\", \"**Model Availability**: It is not possible to reproduce the results since the model is not available yet, but enough details are provided to support the paper.\"], \"questions\": \"The paper is very well written and addresses its objectives. It also supports its claims and provides adequate experiments. Therefore, I am leaning toward accepting this paper, but I have some minor concerns regarding the legality of using WebSurg's surgical videos. I also have some questions:\\n1. The authors mention that the model is limited by hallucinations, which is a serious concern for a surgical chatbot. Could you please provide more details, and types of hallucinations, and give some examples?\\n2. Would it be possible to evaluate LLaVA-Surg on the SSG-VQA dataset? 
I am interested in knowing more about the breadth of your dataset and if it contains enough information for cross-dataset generalization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I am wondering about the WebSurg's policies on using their videos to train deep learning models, but I could not find any information about this in their terms of use.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces LLaVA-Surg, the first multimodal surgical assistant capable of understanding surgical videos and engaging in open-ended conversations about them. The authors create Surg-QA, a dataset of 102,000 surgical video-instruction pairs, using a novel two-stage question-answer generation pipeline. This approach reduces LLM hallucinations and costs by breaking down the generation process. The resulting model demonstrates superior performance in surgical video question-answering compared to previous general-domain models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The pipeline is comprehensive: A two-stage question-answer generation process minimizes hallucinations by extracting information prior to generating pairs, which enhances data quality and reliability compared to Quilt-1M[1], which has a similar approach.\\n\\n2. Integrating surgical visual concept alignment data through action triplets improves text-visual alignment, enhancing the model\\u2019s grasp of surgical concepts.\\n\\n3. The idea is interesting: using the Spearman rank correlation between expert and GPT scores effectively validates the reliability of large-scale GPT evaluation.\\n\\n[1] Ikezogwo, Wisdom, et al. \\\"Quilt-1m: One million image-text pairs for histopathology.\\\" Advances in neural information processing systems 36 (2024).\", \"weaknesses\": \"1. 
Could you provide results for the three existing surgical-domain datasets (EndoVis-18-VQA, Cholec80-VQA, and SSG-VQA) trained on Surg-QA? These results could demonstrate Surg-QA's potential as a foundational dataset in the surgical domain.\\n\\n2. Maybe consider using other video VLM models, which provide a more sophisticated approach to temporal fusion than simple average pooling.\", \"questions\": \"Please address the weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel vision-language model, LLaVA-Surg, designed to assist in surgical settings. Leveraging the newly created Surg-QA dataset with 102K surgical video-instruction pairs, the model provides conversational, open-ended responses to questions about surgical procedures. Evaluations demonstrate LLaVA-Surg\\u2019s superior performance in surgical video question-answering, indicating its potential as a reliable tool in surgery-related applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The Surg-QA dataset, along with the two-stage pipeline, is a significant contribution to medical AI.\", \"LLaVA-Surg\\u2019s ability to process and interpret surgical video content sets it apart from other models focused primarily on static images.\", \"The language is clearly presented. The authors use precise and concise language so that the reader can easily understand the dataset, methodology, and results of the study.\"], \"weaknesses\": [\"Although the dataset is valuable, this storyline and methodology is too similar with LLaVA-Med [1]. Maybe the authors could think of improvements of this simple fine-tuning method (i.e., SFT) to make better use of this dataset.\", \"The paper lacks comparative results. The current comparative models are rarely trained on surgical scene data, which is unfair. 
It is necessary to compare with a specific model.\", \"Since doctors are hired to do the annotation, have the possible ethical risks been resolved? For example, IRB approval, etc.\", \"[1] Li C, Wong C, Zhang S, et al. Llava-med: Training a large language-and-vision assistant for biomedicine in one day[J]. Advances in Neural Information Processing Systems, 2023.\"], \"questions\": [\"Improvement of the methodology.\", \"Detailed Comparison.\"], \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"Since doctors are hired to do the annotation, have the possible ethical risks been resolved? For example, IRB approval, etc.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
04qx93Viwj
Holistically Evaluating the Environmental Impact of Creating Language Models
[ "Jacob Morrison", "Clara Na", "Jared Fernandez", "Tim Dettmers", "Emma Strubell", "Jesse Dodge" ]
As the performance of artificial intelligence systems has dramatically increased, so too has the environmental impact of creating these systems. While many model developers release estimates of the power consumption and carbon emissions from the final training runs for their latest models, there is comparatively little transparency into the impact of model development, hardware manufacturing, and total water usage throughout. In this work, we estimate the real-world environmental impact of developing a series of language models, ranging from 20 million to 13 billion active parameters, trained on up to 5.6 trillion tokens each. When accounting for hardware manufacturing, model development, and our final training runs, we find that our series of models released **493 metric tons** of carbon emissions, equivalent to powering about 98 homes in the United States for one year, and consumed **2.769 million liters of water**, equivalent to about 24.5 years of water usage by a person in the United States, even though our data center is extremely water-efficient. We measure and report the environmental impact of our model development; to the best of our knowledge we are the first to do so for LLMs, and we find that model development, the impact of which is generally not disclosed by most model developers, amounted to **~50%** of that of training. By looking at detailed time series data for power consumption, we also find that power usage throughout training is not consistent, fluctuating between ~15% and ~85% of our hardware's maximum power draw, with negative implications for grid-scale planning as demand continues to grow. We close with a discussion on the continued difficulty of estimating the environmental impact of AI systems, and key takeaways for model developers and the public at large.
[ "machine learning", "artificial intelligence", "language model", "large language models", "environmental impact", "carbon emissions", "water usage" ]
Accept (Spotlight)
https://openreview.net/pdf?id=04qx93Viwj
https://openreview.net/forum?id=04qx93Viwj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWxGPT1maX", "uBLRMtd0XW", "s7eloVINbo", "oryGOMqH38", "or7EBup9QW", "jOnPB8Qy6N", "dn6o9ngTmJ", "d0Ieg2WKZM", "MWx5EIFUjs", "E7dvFdBeth", "6UzT2n5tgD", "3iX0crx1md", "2bWjdQSrSb", "2QoUR8U2o1" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment" ], "note_created": [ 1732517176696, 1734636989743, 1732517691913, 1732524441769, 1730648876057, 1732621022660, 1732664728328, 1730591500983, 1732664496883, 1732556766770, 1730518728567, 1732666666583, 1737524200687, 1732517254231 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12568/Authors" ], [ "ICLR.cc/2025/Conference/Submission12568/Area_Chair_h3Y6" ], [ "ICLR.cc/2025/Conference/Submission12568/Authors" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_TYDT" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_pAyW" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_pAyW" ], [ "ICLR.cc/2025/Conference/Submission12568/Authors" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_zfmC" ], [ "ICLR.cc/2025/Conference/Submission12568/Authors" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_zfmC" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_TYDT" ], [ "ICLR.cc/2025/Conference/Submission12568/Reviewer_TYDT" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12568/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for reviewing our paper, and for your thoughtful comments.\\n\\nRegarding reproducibility, to the best of our knowledge all of our steps are fully reproducible. We will additionally be releasing our power usage time series data in the final version of the paper, so others can analyze it as well. 
Other model developers are also able to replicate our calculations for their own training runs, as long as they log power data during training (as this data will be lost otherwise), and if they record where their training runs took place. That being said, we would like to emphasize that the PUE and WUE coefficients we use are specific to our own server hardware and power providers, and anyone trying to report similar numbers for their own LLM development would likely, in the present time, need to reach out individually to their data center provider, etc. This is one aspect of reporting LLM creation costs that we describe in our Discussion section as being a notable challenge in a broader quest for accurate estimation and management of resource consumption driven by LLM creation and use.\\n\\nRegarding the helpfulness of additional details in the Development section, we agree and will provide further details in the final version of the paper. We are currently withholding some specific information on our development process in order to preserve anonymity, but we will happily provide as much information as possible in the final version, including changes in our model architecture, scaling law and mid-training experiments, and others.\\n\\nRegarding your question, we agree that an analysis of the gap between reporting requirements in machine learning and other resource-intensive fields would be quite valuable. While we do not have a satisfactory answer to your question now, it is an important point and we will explicitly note this in the paper as a promising area of future research.\"}", "{\"metareview\": \"The paper evaluates the end-to-end environmental impact of training LLMs, including hardware manufacturing, model development, final training runs, and deployment. The study accounts for a range of parameters. The takeaways are substantiated experimentally. 
Raising awareness of the conclusions is of general importance and sharing the results will benefit the community. As indicated by one of the reviewers, it is highly valuable to have this data published. The reviewers also suggested a number of ways to improve the manuscript. I would encourage the authors to incorporate these elements in the final version of the paper as it will further strengthen it.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers found the additional clarifications provided by the authors informative. No concerns remained. None of the insights resulting from this work were deemed controversial and found useful to share with the community.\"}", "{\"comment\": \"Thank you for reviewing our work, and for your suggestions.\", \"w1\": \"Regarding novelty, we would like to push back on this point. We do not claim to have developed a novel methodology for calculating the environmental impact of training language models. Instead, we aim to set a new standard for reporting the total environmental impact, and encourage other developers in the community to meet this standard going forward. We aim to provide a comprehensive, holistic evaluation, in contrast with many recent technical reports that only evaluate carbon emissions, assume GPUs are always operating at 100% of their maximum power draw, and only report training costs. In other words, we aim to show that this level of detail is feasible to report, and we encourage others to do so as well.\", \"w2\": \"Regarding model size and generalizability, we agree that different model architectures and training setups (hardware, data center locations, etc) would have an impact on the downstream environmental impact. However, the calculations we perform hold at all model sizes, and location-specific variables (PUE, WUE, etc) can be substituted as necessary.\\n\\nW3 is discussed in our manuscript in both the relevant methods and results subsections, 3.4 and 4.2. 
To reiterate: Regarding deployment in real world scenarios, we agree that our estimates of the cost of model deployment are limited in comparison to real-world data. However, as we state in the paper, we do not host our own models, and thus do not have access to real world data. Instead, our estimates aim to show potential impact, and we encourage those hosting models at large scales to share similar analyses with their own real-world data in the future. In general, we do not report our precise deployment simulation numbers as part of any central claim we make; instead, we include these results to contextualize the relative costs of training and deployment.\\n\\nW4 is also discussed in our manuscript (see 5.1, in our paragraph titled \\u201cEmbodied emissions are still an enigma.\\u201d). To reiterate: Regarding embodied emissions, we agree that our estimates likely are not 100% accurate, as we state in the paper. Instead, we aim to provide a better estimate of the embodied emissions compared to previous work, and to highlight how little information regarding embodied emissions is publicly available, which we discuss in Section 5. Additionally, we make many efforts to obtain real information and estimates from our providers (including contacting our data center providers), and we make reasonable, conservative assumptions about information that we were not able to obtain. We aim to be very careful highlighting the aspects of our estimates that are based on assumptions vs. \\u201creal\\u201d data.\", \"w5\": \"Regarding previously reported environmental impacts, can you explain more about what you mean by \\u201creplicating their results,\\u201d with regards to OLMo\\u2019s and Llama\\u2019s carbon emissions? They have not released power consumption data, and thus we must instead take their reported numbers at face value. 
You do raise a good point though, and we will include estimates of the water consumption *as if their models were trained in our data centers*, as we do not have access to location information for their training runs. In the final version, we will also add comparisons with other models in the deployment estimate section, such as Qwen 2.5.\", \"w6\": \"Regarding model size, we disagree that 7 billion parameter models are not representative of the impact of training larger models. Especially with the growing popularity of deployment-optimized models (such as Gemini Flash, Claude Haiku, GPT-4o mini, etc), we believe that smaller models are only becoming more popular in deployment, especially for on-device settings. However, we have also recently completed training a 13 billion parameter model. We can report that the 13B model, trained to 4 trillion tokens, required about 290 MWh (vs ~157 for the 7B trained to 4T tokens), showing an almost exactly linear trend in training costs as model size grows. We are still calculating other costs for this model, but we will include the full results in the final version.\", \"to_answer_your_questions\": [\"In the OLMo paper (https://arxiv.org/pdf/2402.00838), they released two separate 7B models, and reported the carbon emissions from both models separately, as they were trained on different hardware in different clusters. We compare against both OLMo models in our paper, but we will make it more clear that these are separate models in the final version.\", \"We would like to emphasize that training models at the 7B parameter scale and higher is very expensive, in terms of compute, environmental impact, and money. We are training our models to between 2 and 4 trillion tokens, so scaling beyond 7B is very expensive. 
However, as mentioned above, we have since trained a 13 billion parameter model, and we will include the environmental impact of training this model (also above) in the final version of the paper.\"]}", "{\"comment\": \"I thank the authors for their thorough responses and for addressing most of my concerns. While I appreciate the effort and clarity provided, I still have a couple of additional questions and suggestions for consideration:\\n\\nW1\\u2014Thank you again for clarifying your focus on setting a new reporting standard rather than developing a novel methodology. I appreciate the emphasis on providing a comprehensive evaluation and encouraging the community to adopt this level of detail in their analyses. That said, I wonder how you differentiate this work from being categorized as a detailed technical report for your private model, particularly since you do not claim methodological novelty, the equations used are based on prior works, and much of the data of other models rely on referencing other technical reports.\\n\\nW4\\u2014 Indeed, it would be highly beneficial if you highlight and distinguish the real vs simulated data. \\n\\nW5\\u2014 I understand that the previous environments did not report their power consumption data, and I appreciate your acknowledgment of this limitation. My point was primarily to highlight this gap. That said, I believe that retraining some of these models on X tokens under your settings, if feasible, could provide valuable estimates to address this gap and offer further insights into the potential environmental impacts under consistent conditions.\\n\\nFurthermore, if the authors include water consumption estimates based on their own data center conditions, it will be crucial to clearly distinguish these estimates in the manuscript to avoid any misinterpretation or association with the original reported results. This clarity will be key to maintaining accuracy. 
Additionally, I highly appreciate the inclusion of new models and look forward to seeing how they contribute to the broader context of your analysis.\\n\\nW6\\u2014Including results from your 13B parameter model in the final version will certainly bolster the generalizability of your claims about training costs across model sizes. It may also help to explicitly highlight this linear trend in training costs in the methods or results sections to strengthen your argument for scaling costs.\"}", "{\"summary\": \"This paper provides estimates and insights into power and water consumption during training and inference of Large Language Models (LLMs). Many of these estimates are based on real experiments in training these models of different parameter sizes. Some are rough estimates for cases where experiments were not possible, such as GPU manufacturing. The paper highlights that there are side activities such as data center cooling, development of the model etc. which are not accounted for in the literature reporting carbon emissions from model training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper raises very important concerns about transparency in the area of energy and water consumption required for developing as well as using LLMs.\\n2. This paper includes aspects of this process which have not been reported before such as hyper-parameter tuning and manufacturing of GPUs.\\n3. The discussion around these calculations is useful for others to understand the environmental implications of doing AI research.\", \"weaknesses\": \"1. While some of the calculations are clear and seem reproducible as long as some of the manufacturer specific quantities are known, I am not 100% certain if all the steps can be followed by others. It would be useful if the authors can confirm whether someone can apply their methodology for similar experiments/calculations and if the paper contains all the details needed to do so.\\n2. 
It would be helpful if the `Development` part of Section 4.1 can provide more details of what it covers i.e. what kind of HPO, what was the search space, what scaling experiments etc.\", \"minor\": \"more transparency from developers on when, where, and how they a;re training their models -> more transparency from developers on when, where, and how they are training their models\", \"questions\": \"I would think similar studies are done for other fields such as electric vehicles. Are there better regulations for reporting in those? What are some other parallels and what other gaps can we find in transparency compared to them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their response.\"}", "{\"comment\": \"W5: Retraining other models would be an infeasible experiment to add on to this paper, unfortunately. First, only OLMo has released their exact pretraining data, so replicating Llama\\u2019s pretraining is not possible, even ignoring potential access issues to similar training code, hardware, etc. Training other 7B models for even 1% of our maximum token count would cost roughly $50,000, based on conservative estimates of GPU hour market prices (https://www.hyperstack.cloud/gpu-pricing), which is too expensive for a single experiment for nearly all organizations. We instead compare many model sizes using the same architecture, trained to similar token counts, to show environmental impacts under otherwise consistent conditions.\\n\\nAdditionally, running these experiments using our setup would only allow us to answer a single research question: how much impact does the architecture have? 
While this is an interesting research question, to appropriately answer it we would recommend significant experimentation beyond the models we compare against, and careful ablations changing each part of the architectures, which we leave to future work.\", \"w1\": \"To clarify, we do claim novelty in our work: we are the first to report all of (i) electricity consumption, (ii) CO2 emissions, and (iii) water consumption at three points in the machine learning pipeline: early model development, training, and inference. At the time of submission, our work is the state of the art with respect to estimation of resource consumption from training language models.\\n\\nWith respect to concerns that \\u201c*the equations used are based on prior works*\\u201d, we chose to use the equations we did because these are the current best practice methods for calculating the environmental impact of model training given the information we have. For example, to report CO2 emissions, our Equation 2 uses power consumption, PUE, and grid carbon intensity. This equation provides the best estimate with the data we have, and if we had changed it we would no longer have the best estimate of CO2 emissions. Our work uses best practice throughout, and thus we intend for it to act as a reference for future researchers on how to report on the environmental impact throughout the machine learning pipeline. We hope that this convinces the reviewer that using the correct equations for reporting the most relevant information is not a reason to decrease the score of the paper.\\n\\n> *detailed technical report for your private model*\\n\\nTo be clear, our models are not private, and will be publicly released by the time of final publication. 
We have withheld identifying information in order to preserve anonymity, but we will provide links to all of our models in the final version of the paper.\\n\\nTo reiterate, as the reviewer acknowledges in the followup response, we believe that one of the key contributions of our work is to set a new standard for reporting of resource consumption during training of language models. Our hope is that our work will prompt others to report with a similar or greater amount of detail and care, and we believe it meets the standards for novelty and impact for a conference publication.\", \"w4\": \"To be clear, we already do work to distinguish between real and simulated data (e.g. Section 4.2, *Simulating Deployment & Inference*), but we will make sure that there is no confusion throughout. We\\u2019re also happy to provide extra clarity now if the reviewer feels there are any specific parts of the manuscript that are unclear.\", \"w6\": \"Thank you for the suggestion. To reiterate, we have already established a linear trend across three orders of magnitude, with seven model sizes. We will include the 4th order of magnitude/8th model size (the 13B model), but we believe that our results stand on their own already.\\n\\nMore generally, the reviewer mentioned clarity multiple times. Is there a particular part of the submission that they feel could have been written more clearly, other than the meaning of the 80% figure that we have already discussed with Reviewer zfmC?\"}", "{\"summary\": \"This paper estimates the real-world environmental impact of training LLMs, including hardware manufacturing, model development, final training runs, and deployment. They vary model sizes, model architectures, and training time, providing a first view of the embodied CO2 production and water usage from AI systems at these scales. 
The entire study results in a production of 270t CO2 and a usage of 1.1M liters of water.\\n\\nAdditionally, it finds that model development accounts for 80% of the environmental impact, which is often not stated in related work. Additionally, they find that power draw has highs and lows due to checkpointing, creating a mixed load on the electrical grid, which may not easily be addressed, resulting in control challenges for power providers.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"S1: This paper provides the first-ever comprehensive end-to-end view of the environmental impact of an LLM. It is highly valuable to have this data published, as we, as a research community, need to have a complete view of how current training and deployment practices affect CO2 production and water usage. The insights and data provided can be used as a building block to argue future performance improvements not only for $ cost reasons but also to quantify their environmental impact.\", \"s2\": \"The authors take care to question current naive assumptions like a 100% power draw during training, making this paper stand out. While this is a low bar, research on environmental impact in LLM training has had its share of invalidated research due to these minor, overlooked details.\", \"s3\": \"The authors estimate the water usage during the development of an LLM, which I have not seen before in this line of research. This adds a new dimension to the environmental impact, providing a more complete picture of how current AI practices affect our environment.\", \"weaknesses\": \"W1: The result of model development being a large chunk of the environmental impact is not too surprising, but I agree that it is important to track and present in this paper. I am wondering about the representativeness of the data presented in this paper for model development and whether we will see a similar trend continue in the future. 
Given that this is a key contribution outlined in the abstract, I question whether the number of 80% will change significantly in future related work and if there are steps to take to present this more confidently. I am afraid that researchers in related fields take the final training costs and multiply them by 5x due to the results in this paper.\", \"w2\": \"The second point of discussion has a similar issue as W1. While I agree that oscillating power draw may be a problem for power providers, I hesitate to agree that this is an issue at large. GPU-memory checkpointing has been shown to be possible by Gemini (https://arxiv.org/pdf/2312.11805), which likely reduces this time to sub-seconds. I am not against keeping this insight in general, and that power draw may be an issue for future techniques, but I question the future-proofness of this discussion point. Also, this being a problem for power providers could be explained in more detail and what this implies for the environmental impact.\\n\\nW3 (minor issues):\\n* The EU AI Act could be included in 5.1 as it also includes the environmental impact of AI systems (e.g. Art 95 https://artificialintelligenceact.eu/article/95/)\\n* Figure 1 takes a lot of space for the amount of information it provides. A double y-axis may not be the best visualization as it makes it harder to initially grasp the information. Maybe using two scatter plots would make the visualization more compact and easier to understand?\", \"questions\": \"I would like the authors to address W1 and W2, whether they agree or not, and if they do, how they plan to address them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your actionable suggestions! 
They are very helpful.\", \"w1\": \"Thank you, we will make sure the final version makes this point clearly, and we will include a discussion of our confidence in the generalizability of our results with regards to model development costs.\", \"w2\": \"Thank you for outlining your argument! We will incorporate your line of reasoning in the final version. We would also like to add that even as checkpointing improves, there are other potential causes of large-scale power demand changes, such as waiting for collective communications to finish, as noted by the Llama report. Taken together, these highlight the need for continued improvements in both private and publicly available model training infrastructure, to ensure that solutions to these potentially common problems are able to be used by the broader community.\\n\\nThank you as well for raising your score!\"}", "{\"title\": \"Reviewer zfmC Answer to the Authors\", \"comment\": \"Dear authors,\\n\\nThank you for considering my proposals to improve the paper!\\n\\nRegarding W1), thanks for that clarification of my initial misunderstanding of 1.8x rather than 5x of the cost. Improving the wording will definitely help, and I'm eager to read the updated version. Please take care to frame the results with the exact confidence you have and showcase the limitations in your assumptions such that this result becomes as future-proof as possible.\\n\\nRegarding W2), thanks for including the reference and the work on GPU-memory based checkpointing. Again, I only hesitantly agree with the way this argument is currently structured. While Meta's Llama 405B is a solid argument for now (good engineers, large investments, etc.), I very much expect this to be one of the last models using checkpointing in such a fashion (at least with this kind of frequency). 
With the way DL progresses right now, this might be outdated by next year and it would be a shame if one of the paper's key conclusions becomes obsolete.\", \"let_me_outline_how_i_would_argue_this_for_the_near_term_future_proof_ness\": \"1) Memory-based checkpointing does not mitigate the risk of catastrophic failures (rack-wide energy loss, earthquakes, spine-level networking equipment breaking down, basically any kind of correlated failures).\\n2) Checkpointing is not only for saving progress to account for failures, but also to restart training in case updates are needed (e.g., updating the learning rate based on some signal).\\n3) Therefore, even with memory-based checkpointing, this does not absolve the need for storage-based checkpointing. Even if live changes to the training algorithm become the norm (2), issue (1) is likely not solvable without some kind of progress saving. \\n\\nFor the long-term future, this is still looking pretty rough, as asynchronous checkpointing can become the norm soon, especially if it is solely used to limit the impact of (1). Check out the future work section by IBM + PyTorch here https://pytorch.org/blog/reducing-checkpointing-times/ from June 2024. \\n\\nAll in all, this is a solid contribution for today's checkpointing standards if explained in more detail as the authors suggested. I still would love some more information on how this is an issue for the power providers and why it is hard for them to account for spikes that could be announced beforehand, but maybe this is a bit out of scope of this conference's target audience. Feel free to incorporate my proposed line of argumentation in your final version or showcase that I am wrong here.\", \"edit\": \"I've changed my score to 8 (accept, good paper).\"}", "{\"summary\": \"This paper discusses the impact of large language models on the environment by studying the costs associated with training and deploying them. 
It explores the hidden cost of training a model, particularly when it comes to hardware manufacturing and the pre-training steps of creating a model, along with the training and deploying costs. They run their experiments on a small set of models with parameter sizes ranging between 20 million and 7 billion parameters. The results show that models released 270 metric tons of carbon emissions and consumed 1.137 million liters of water. Additionally, the author discusses the power fluctuation during training that is a result of model checkpointing.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The discussion of energy consumption and costs incurred by training and deploying LLMs is a rising and highly relevant topic.\", \"this_paper_has_the_following_strengths\": \"1. It is one of the first studies to report the environmental impact of model development, not just the final training runs, highlighting the significant but often overlooked costs associated with hyperparameter tuning and other development activities.\\n2. Cost evaluation encompasses power consumption, carbon emissions, and water usage and is not confined to a single aspect.\\n3. Reporting the costs and putting them into perspective with real-world equivalencies.\", \"weaknesses\": \"Even though the topic is highly interesting, there are some limitations to the paper:\\n1. Lack of novelty: The issue of power consumption in LLMs has been widely studied, and this paper doesn't provide any additional ideas, metrics, or insights except for the study of development cost.\\n2. The findings are based on a specific set of small models, which may limit the generalizability of the results to other models and data centers with different configurations and efficiencies.\\n3. The study does not include data from actual deployment and usage of the models, relying instead on simulated scenarios, which may not fully reflect the actual environmental costs. 
In fact, the paper has a limited set of inference simulations with very simplistic assumptions, which may not fully capture the real-world deployment scenarios and their environmental impacts. \\n4. Some of the calculations rely on assumptions and estimates, particularly regarding the embodied emissions and water consumption of hardware manufacturing, which may not be entirely accurate.\\n5. Limited comparison across models: The authors seem to have taken the carbon consumption of Llama and OLMo in Table 2 from previous works without replicating results, which meant no water usage comparison for training. For deployment, they only compare with Llama.\\n6. Given the small sizes of the models, the paper lacks an analysis of how their results scale to larger models.\", \"questions\": \"1. In Table 2, why is OLMo reported twice? Is there a difference?\\n2. I am curious to know how much time it took to train your models. Given the hardware resources you had (up to 64 HGX servers with H100), why was your study limited to 7 billion parameter models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the clarifications. Everything is clear to me, but I encourage the authors to ensure the manuscript reflects previous comments fully and that there is no room for confusion, particularly around distinguishing between real and simulated data.\\n\\nWhile I still believe the work primarily reads as a detailed technical report of a specific model that focuses on environmental factors, I recognize that the community could benefit from the practices you propose for detailed reporting of resource consumption at multiple stages of the training/inference pipeline.\\nBased on this, I have raised my score to 6. 
I hope this feedback is helpful, and I look forward to seeing the final version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Thank you for reviewing our work, and your thoughtful comments and suggestions.\\n\\nRegarding development costs, we agree that the wording of our \\u201880%\\u2019 result could be made clearer \\u2013 we will make sure to clarify that our reported development costs amounted to 80% of the final training run costs, rather than 80% of the total costs, such that in our case, the total costs were 1.8x (not 5x) the final runs\\u2019 costs \\u2013 we hope that with more careful wording, people will not naively apply a 5x multiplier to other organizations\\u2019 reported final run costs. Though we do not go as far as to claim that our 80% figure is generalizable, we believe that it is a reasonable and roughly representative (conservative) measurement that we hope can inform other researchers and practitioners who set out to develop their own models.\\n\\n\\nWith regards to the oscillating power draw, we agree that there are methods of improving training infrastructure to improve the overall process of saving checkpoints, and ideally reducing the impact of major fluctuations in power consumption (though, we note that while GPU-memory checkpointing may reduce the amount of time taken to save a checkpoint, it does not remove it entirely; to do so would instead require a fully asynchronous checkpointing method). However, even some of the largest and best resourced model developers still encounter problems with power fluctuations during checkpointing. For example, the Llama 3 report (https://arxiv.org/pdf/2407.21783, page 14) states:\\n\\n> *During training, tens of thousands of GPUs may increase or decrease power consumption at the same time, for example, due to all GPUs waiting for checkpointing or collective communications to finish, or the startup or shutdown of the entire training job. 
When this happens, it can result in instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid. This is an ongoing challenge for us as we scale training for future, even larger Llama models.*\\n\\nThat being said, the reviewer raises a good point, and we will make this argument more clearly in the final version, and more explicitly advocate for improved public model training infrastructure, so the broader ecosystem of smaller developers can take advantage of techniques such as GPU-memory checkpointing (we will also include your recommended citation in the final version).\\n\\nRegarding your final suggestions, thank you for pointing out that very relevant section of the EU AI Act. We will be sure to include this in the final version. We will also take your suggestions into account for Figure 1; we were planning to make a few improvements to it already, and I believe your suggestions will also help improve it.\"}" ] }